free R-module for each n. Then the cross product map

⊕_{k+ℓ=n} H^k(X; R) ⊗ H^ℓ(Y; R) → H^n(X × Y; R)

is an isomorphism for every n, for every finite cell complex X. It follows from the five lemma that the same holds if we have a relative complex (Y, A) instead of just Y.

For convenience, we will write H^*(X; R) ⊗ H^*(Y; R) for the graded R-module which in grade n is given by ⊕_{k+ℓ=n} H^k(X; R) ⊗ H^ℓ(Y; R). Then Künneth says the cross product map

× : H^*(X; R) ⊗ H^*(Y; R) → H^*(X × Y; R)

is an isomorphism of graded rings.

Proof. Let F^n(−) = ⊕_{k+ℓ=n} H^k(−; R) ⊗ H^ℓ(Y; R). We similarly define G^n(−) = H^n(− × Y; R). We observe that for each X, the cross product gives a map × : F^n(X) → G^n(X), and, crucially, we know × : F^n(∗) → G^n(∗) is an isomorphism, since F^n(∗) ≅ G^n(∗) ≅ H^n(Y; R).

The strategy is to show that F^n(−) and G^n(−) have the same formal structure as cohomology and agree on a point, and so must agree on all finite cell complexes.

It is clear that both F^n and G^n are homotopy invariant, because they are built out of homotopy invariant things.

9 Künneth theorem and universal coefficients theorem (III Algebraic Topology)

We now want to define the cohomology of pairs. This is easy. We define

F^n(X, A) = ⊕_{i+j=n} H^i(X, A; R) ⊗ H^j(Y; R)
G^n(X, A) = H^n(X × Y, A × Y; R).

Again, the relative cup product gives us a relative cross product,
which gives us a map F^n(X, A) → G^n(X, A).

It is immediate that G^n has a long exact sequence associated to (X, A), given by the usual long exact sequence of the pair (X × Y, A × Y). We would like to say F has a long exact sequence as well, and this is where our hypothesis comes in. If H^*(Y; R) is a free R-module, then we can take the long exact sequence of (X, A)

··· → H^{n−1}(A; R) →^∂ H^n(X, A; R) → H^n(X; R) → H^n(A; R) → ···,

and then tensor with H^j(Y; R). This preserves exactness, since H^j(Y; R) ≅ R^k for some k, so tensoring with H^j(Y; R) just takes k copies of this long exact sequence. By adding the different long exact sequences for different j (with appropriate degree shifts), we get a long exact sequence for F.

We now want to prove Künneth by induction on the number of cells and the dimension at the same time. We are going to prove that if X = X′ ∪_f D^n for some f : S^{n−1} → X′, and × : F(X′) → G(X′) is an isomorphism, then × : F(X) → G(X) is also an isomorphism. In doing so, we will assume that the result is true for attaching any cells of dimension less than n.

Suppose X = X′ ∪_f D^n for some f : S^{n−1} → X′. We get long exact sequences

F^{*−1}(X′) → F^*(X, X′) → F^*(X) → F^*(X′) → F^{*+1}(X, X′)
   ↓ ×≅          ↓ ×          ↓ ×       ↓ ×≅         ↓ ×
G^{*−1}(X′) → G^*(X, X′) → G^*(X) → G^*(X′) → G^{*+1}(X, X′)

Note that we need to manually check that the boundary maps ∂ commute with the cross product, since the boundary map is not induced by a map of spaces, but we will not do it here. Now by the five lemma, it suffices to show that the map on relative cohomology × :
F^n(X, X′) → G^n(X, X′) is an isomorphism.

We now notice that F^*(−) and G^*(−) have excision. Since (X, X′) is a good pair, we have a commutative square

F^*(D^n, ∂D^n) →^× G^*(D^n, ∂D^n)
      ↓ ≅                ↓ ≅
F^*(X, X′)     →^× G^*(X, X′)

So we now only need the left-hand map to be an isomorphism. We look at the long exact sequence for (D^n, ∂D^n)!

F^{*−1}(∂D^n) → F^*(D^n, ∂D^n) → F^*(D^n) → F^*(∂D^n) → F^{*+1}(D^n, ∂D^n)
   ↓ ×≅            ↓ ×             ↓ ×≅        ↓ ×≅          ↓ ×
G^{*−1}(∂D^n) → G^*(D^n, ∂D^n) → G^*(D^n) → G^*(∂D^n) → G^{*+1}(D^n, ∂D^n)

But now we know the vertical maps for D^n and ∂D^n are isomorphisms — the ones for D^n because D^n is contractible, and we have seen the result for ∗ already; whereas the result for ∂D^n follows by induction. So by the five lemma, we are done.

The conditions of the theorem require that H^n(Y; R) is free. When will this hold? One important example is when R is a field, in which case all modules are free.

Example. Consider H^*(S^1; Z). We know it is Z in degrees 0 and 1, and 0 elsewhere. Let's call the generator of H^0(S^1; Z) "1", and the generator of H^1(S^1; Z) x. Then we know that x ⌣ x = 0, since there isn't anything in degree 2. So

H^*(S^1; Z) = Z[x]/(x²).

Then Künneth's theorem tells us that

H^*(T^n; Z) ≅
H^*(S^1; Z)^{⊗n}, where T^n = (S^1)^n is the n-torus, and this is an isomorphism of rings. So this is

H^*(T^n; Z) ≅ Z[x_1, ···, x_n]/(x_i², x_i x_j + x_j x_i),

using the fact that the x_i have degree 1, so they anti-commute. Note that this has an interesting cup product! This ring is known as the exterior algebra on n generators.

Example. Let f : S^n → T^n be a map for n > 1. We claim that this induces the zero map on the nth cohomology. We look at the induced map on cohomology:

f^* : H^n(T^n; Z) → H^n(S^n; Z).

Looking at the presentation above, we know that H^n(T^n; Z) is generated by x_1 ⌣ ··· ⌣ x_n, and f^* sends it to (f^*x_1) ⌣ ··· ⌣ (f^*x_n). But f^*x_i ∈ H^1(S^n; Z) = 0 for all n > 1. So f^*(x_1 ⌣ ··· ⌣ x_n) = 0.

Note that the statement does not involve cup products at all, but it would be much more difficult to prove this without using cup products!

We are now going to prove another useful result.

Theorem (Universal coefficients theorem for (co)homology). Let R be a PID and M an R-module. Then there is a natural map

H_*(X; R) ⊗ M → H_*(X; M).

If H_n(X; R) is a free module for each n, then this is an isomorphism. Similarly, there is a natural map

H^*(X; M) → Hom_R(H_*(X; R), M),

which is again an isomorphism if H_n(X; R) is free.

In particular, when R = Z, an R-module is just an abelian group, and this tells us how homology and cohomology with coefficients in an abelian group
relate to the usual homology and cohomology theory.

Proof. Let C_n = C_n(X; R), and let Z_n ⊆ C_n be the cycles and B_n ⊆ Z_n the boundaries. Then there is a short exact sequence

0 → Z_n →^i C_n →^g B_{n−1} → 0,

and B_{n−1} ⊆ C_{n−1} is a submodule of a free R-module, hence itself free, since R is a PID. So by picking a basis, we can find a map s : B_{n−1} → C_n such that g ∘ s = id_{B_{n−1}}. This induces an isomorphism

i ⊕ s : Z_n ⊕ B_{n−1} →^≅ C_n.

Now tensoring with M, we obtain

0 → Z_n ⊗ M → C_n ⊗ M → B_{n−1} ⊗ M → 0,

which is exact because we have

C_n ⊗ M ≅ (Z_n ⊕ B_{n−1}) ⊗ M ≅ (Z_n ⊗ M) ⊕ (B_{n−1} ⊗ M).

So we obtain a short exact sequence of chain complexes

0 → (Z_n ⊗ M, 0) → (C_n ⊗ M, d ⊗ id) → (B_{n−1} ⊗ M, 0) → 0,

which gives a long exact sequence in homology:

··· → B_n ⊗ M → Z_n ⊗ M → H_n(X; M) → B_{n−1} ⊗ M → ···

We'll leave this for a while, and look at another short exact sequence. By definition of homology, we have a short exact sequence

0 → B_n → Z_n → H_n(X; R) → 0.

As H_n(X; R) is free, we have a splitting t : H_n(X; R) → Z_n, so as above, tensoring with M preserves exactness, and we have

0 → B_n ⊗ M → Z_n ⊗ M → H_n(X; R) ⊗ M → 0.

Hence we know that B_n ⊗ M → Z_n ⊗ M is injective. So our previous long exact sequence breaks up into

0 → B_n ⊗ M → Z_n ⊗ M → H_n(X; M) → 0.

Since we have two short exact sequences
with the first two terms equal, the last terms have to be equal as well.

The cohomology version is similar.

10 Vector bundles

10.1 Vector bundles

We now end the series of random topics, and work on a more focused topic. We are going to look at vector bundles. Intuitively, a vector bundle over a space X is a continuous assignment of a vector space to each point of X. In the first section, we are just going to look at vector bundles as topological spaces. In the next section, we are going to look at homological properties of vector bundles.

Definition (Vector bundle). Let X be a space. A (real) vector bundle of dimension d over X is a map π : E → X, with a (real) vector space structure on each fiber E_x = π^{−1}({x}), subject to the local triviality condition: for each x ∈ X, there is a neighbourhood U of x and a homeomorphism ϕ : E|_U = π^{−1}(U) → U × R^d such that the diagram

E|_U →^ϕ U × R^d
   π ↘     ↙ π_1
       U

commutes, and for each y ∈ U, the restriction ϕ|_{E_y} : E_y → {y} × R^d is a linear isomorphism. This map ϕ is known as a local trivialization.

We have an analogous definition for complex vector bundles.

Definition (Section). A section of a vector bundle π : E → X is a map s : X → E such that π ∘ s = id. In other words, s(x) ∈ E_x for each x.

Definition (Zero section). The zero section of a vector bundle is s_0 : X → E given by s_0(x) = 0 ∈ E_x.

Note that the composition E →^π X →^{s_0} E is homotopic to the identity map id_E, since each E_x is contractible.

One important operation we can do on vector bundles is pullback:

Definition (Pullback of vector bundles). Let π : E → X be a vector bundle, and f : Y → X a map. We define the pullback

f^*E = {(y
, e) ∈ Y × E : f(y) = π(e)}.

This has a map f^*π : f^*E → Y given by projecting to the first coordinate. The vector space structure on each fiber is given by the identification (f^*E)_y = E_{f(y)}. It is a little exercise in topology to show that the local trivializations of π : E → X induce local trivializations of f^*π : f^*E → Y.

Everything we can do on vector spaces can be done on vector bundles, by doing it on each fiber.

Definition (Whitney sum of vector bundles). Let π : E → X and ρ : F → X be vector bundles. The Whitney sum is given by

E ⊕ F = {(e, f) ∈ E × F : π(e) = ρ(f)}.

This has a natural map π ⊕ ρ : E ⊕ F → X given by (π ⊕ ρ)(e, f) = π(e) = ρ(f). This is again a vector bundle, with (E ⊕ F)_x = E_x ⊕ F_x, and again local trivializations of E and F induce one for E ⊕ F. Tensor products can be defined similarly.

Similarly, we have the notion of subbundles.

Definition (Vector subbundle). Let π : E → X be a vector bundle, and F ⊆ E a subspace such that for each x ∈ X, there is a local trivialization (U, ϕ)

E|_U →^ϕ U × R^d
   π ↘     ↙ π_1
       U

such that ϕ takes F|_U to U × R^k, where R^k ⊆ R^d. Then we say F is a vector subbundle.

Definition (Quotient bundle). Let F be a subbundle of E. Then E/F, given by the fiberwise quotient, is a vector bundle, known as the quotient bundle.

We now look at one of the most important examples of a vector bundle. In some sense, this is the “universal
” vector bundle, as we will see later.

Example (Grassmannian manifold). We let

X = Gr_k(R^n) = {k-dimensional linear subspaces of R^n}.

To topologize this space, we pick a fixed V ∈ Gr_k(R^n). Then any k-dimensional subspace can be obtained by applying some linear map to V. So we obtain a surjection

GL_n(R) → Gr_k(R^n)
M ↦ M(V).

So we can give Gr_k(R^n) the quotient (final) topology. For example, Gr_1(R^{n+1}) = RP^n.

Now to construct a vector bundle, we need to assign a vector space to each point of X. But a point in Gr_k(R^n) is a vector space, so we have an obvious definition

E = {(V, v) ∈ Gr_k(R^n) × R^n : v ∈ V}.

This has the evident projection π : E → X given by the first projection. We then have E_V = V.

To see that this is a vector bundle, we have to check local triviality. We fix a V ∈ Gr_k(R^n), and let

U = {W ∈ Gr_k(R^n) : W ∩ V^⊥ = {0}}.

We now construct a map ϕ : E|_U → U × V ≅ U × R^k by mapping (W, w) to (W, pr_V(w)), where pr_V : R^n → V is the orthogonal projection. Now if (W, w) ∈ E|_U and pr_V(w) = 0, then w ∈ W ∩ V^⊥ = {0}, so w = 0. Thus ϕ is injective on each fiber, and one checks it is a homeomorphism.

We call this bundle γ^R_{k,n} → Gr_k(R^n). In the same way, we can get a canonical complex vector bundle γ^C_{k,n} → Gr_k(C^n).

Example. Let M be a smooth d-dimensional manifold. Then it naturally has a d-dimensional tangent bundle π : TM → M with (TM)_x = T_xM.

If M ⊆ N is a smooth submanifold, with i the inclusion map, then
TM is a subbundle of i^*TN. Note that we cannot say TM is a subbundle of TN, since they have different base spaces, and thus cannot be compared without pulling back.

The normal bundle of M in N is

ν_{M⊆N} = i^*TN / TM.

Here is a theorem we have to take on faith, because proving it will require some differential geometry.

Theorem (Tubular neighbourhood theorem). Let M ⊆ N be a smooth submanifold. Then there is an open neighbourhood U of M and a homeomorphism ν_{M⊆N} → U, and moreover, this homeomorphism is the identity on M (where we view M as a submanifold of ν_{M⊆N} via the image of the zero section).

This tells us that locally, the neighbourhood of M in N looks like ν_{M⊆N}.

We will also need some results from point-set topology:

Definition (Partition of unity). Let X be a compact Hausdorff space, and {U_α}_{α∈I} an open cover. A partition of unity subordinate to {U_α} is a collection of functions λ_α : X → [0, ∞) such that

(i) supp(λ_α) = closure of {x ∈ X : λ_α(x) > 0} ⊆ U_α.
(ii) Each x ∈ X lies in only finitely many of the supp(λ_α).
(iii) For each x, we have Σ_{α∈I} λ_α(x) = 1.

Proposition. Partitions of unity exist for any open cover.

You might have seen this in differential geometry, but this version is easier, since we do not require the partitions of unity to be smooth.

Using this, we can prove the following:

Lemma. Let π : E → X be a vector bundle over a compact Hausdorff space. Then there is a continuous family of inner products on E. In other words, there is a map E ⊗ E → R which restricts to an inner product on each E_x.

Proof. We notice that every trivial bundle has an inner product, and since every bundle is locally trivial, we can patch these
up using partitions of unity.

Let {U_α}_{α∈I} be an open cover of X with local trivializations ϕ_α : E|_{U_α} → U_α × R^d. The inner product on R^d then gives us an inner product on E|_{U_α}, say ⟨·, ·⟩_α. We let λ_α be a partition of unity subordinate to {U_α}. Then for u ⊗ v ∈ E ⊗ E, we define

⟨u, v⟩ = Σ_{α∈I} λ_α(π(u)) ⟨u, v⟩_α.

Now if π(u) = π(v) is not in U_α, then we don't know what we mean by ⟨u, v⟩_α, but it doesn't matter, because λ_α(π(u)) = 0. Also, since the partition of unity is locally finite, we know this is a finite sum.

It is then straightforward to see that this is indeed an inner product, since a positive linear combination of inner products is an inner product.

Similarly, we have:

Lemma. Let π : E → X be a vector bundle over a compact Hausdorff space. Then there is some N such that E is a vector subbundle of X × R^N.

Proof. Let {U_α} be a trivializing cover of X. Since X is compact, we may wlog assume the cover is finite. Call them U_1, ···, U_n, with trivializations ϕ_i : E|_{U_i} → U_i × R^d.

We note that on each patch, E|_{U_i} embeds into a trivial bundle, because it is a trivial bundle. So we can try to add all of these together. The trick is to use a partition of unity, again.

We define f_i to be the composition

E|_{U_i} →^{ϕ_i} U_i × R^d →^{π_2} R^d.

Then given a partition of unity λ_i, we define

f : E → X × (R^d)^n
v ↦ (π(v), λ_1(π(v)) f_1(v), λ_2(π(v)) f_2(v), ···, λ_n(π(v)) f_n(v)).

We see that this is injective. If v, w belong to different fibers,
then the first coordinate distinguishes them. If they are in the same fiber, then there is some U_i with λ_i(π(v)) ≠ 0. Then looking at the ith coordinate distinguishes them. This then exhibits E as a subbundle of X × R^{nd}.

Corollary. Let π : E → X be a vector bundle over a compact Hausdorff space. Then there is some p : F → X such that E ⊕ F ≅ X × R^N. In particular, E embeds as a subbundle of a trivial bundle.

Proof. By the above, we can assume E is a subbundle of a trivial bundle. We can then take the orthogonal complement of E.

Now suppose again we have a vector bundle π : E → X over a compact Hausdorff X. We can then choose an embedding E ⊆ X × R^N, and we get a map f_π : X → Gr_d(R^N) sending x to E_x ⊆ R^N. Moreover, if we pull back the tautological bundle along f_π, then we have

f_π^* γ^R_{d,N} ≅ E.

So every vector bundle is the pullback of the canonical bundle γ^R_{d,N} over a Grassmannian. However, there is a slight problem. Different vector bundles will require different N's. So we will have to overcome this problem if we want to make a statement of the sort "a vector bundle is the same as a map to a Grassmannian".

The solution is to construct some sort of Gr_d(R^∞). But infinite-dimensional vector spaces are weird. So instead we take the union of all the Gr_d(R^N). We note that for each N, there is an inclusion Gr_d(R^N) → Gr_d(R^{N+1}), which induces an inclusion of the canonical bundles. We can then take the union to obtain Gr_d(R^∞), with a canonical bundle γ^R_d. Then the above result shows that each vector bundle over X is the pullback of the canonical bundle γ^R_d along some map f : X → Gr_d(R^∞).

Note that a vector bundle over X does not uniquely specify a map X →
Gr_d(R^∞), as there can be multiple embeddings of E into the trivial bundle X × R^N. Indeed, if we wiggle the embedding a bit, then we can get a new map. So we don't have a bijective correspondence between vector bundles π : E → X and maps f_π : X → Gr_d(R^∞).

The next best thing we can hope for is that "wiggling the embedding a bit" is all we can do. More precisely, two maps f, g : X → Gr_d(R^∞) pull back isomorphic vector bundles if and only if they are homotopic. This is indeed true:

Theorem. There is a correspondence

{homotopy classes of maps f : X → Gr_d(R^∞)} ←→ {d-dimensional vector bundles π : E → X}
[f] ↦ f^*γ^R_d
[f_π] ↤ π

The proof is mostly technical, and is left as an exercise on the example sheet.

10.2 Vector bundle orientations

We are now going to do something that actually involves algebraic topology. Unfortunately, we have to stop working with arbitrary bundles, and focus on orientable bundles instead. So the first thing to do is to define orientability.

What we are going to do is to come up with a rather refined notion of orientability. For each commutative ring R, we will have the notion of R-orientability. The strength of this condition will depend on what R is — any vector bundle is F_2-orientable, while Z-orientability is the strongest — if a vector bundle is Z-orientable, then it is R-orientable for all R.

While there isn't really a good geometric way to think about general R-orientability, the reason for this more refined notion is that whenever we want things to be true for (co)homology with coefficients in R, we need the bundle to be R-orientable.

Let's begin. Let π : E → X be a vector bundle of dimension d. We write E^# = E \ s_0(X). We now look at the relative cohomology groups

H^i(E_x,
E^#_x; R), where E^#_x = E_x \ {0}.

We know E_x is a d-dimensional vector space, so we can choose an isomorphism E_x → R^d. After making this choice, we know that

H^i(E_x, E^#_x; R) ≅ H^i(R^d, R^d \ {0}; R) = { R if i = d; 0 otherwise }.

However, there is no canonical generator of H^d(E_x, E^#_x; R) as an R-module, as we had to pick an isomorphism E_x ≅ R^d.

Definition (R-orientation). A local R-orientation of E at x ∈ X is a choice of R-module generator ε_x ∈ H^d(E_x, E^#_x; R). An R-orientation is a choice of local R-orientations {ε_x}_{x∈X} which are compatible in the following way: if U ⊆ X is open on which E is trivial, and x, y ∈ U, then under the homeomorphisms (and in fact linear isomorphisms)

h_x : E_x ↪ E|_U →^{ϕ_α} U × R^d →^{π_2} R^d
h_y : E_y ↪ E|_U →^{ϕ_α} U × R^d →^{π_2} R^d,

the map

h_y^* ∘ (h_x^{−1})^* : H^d(E_x, E^#_x; R) → H^d(E_y, E^#_y; R)

sends ε_x to ε_y. Note that this definition does not depend on the choice of ϕ_α, because we used it twice, and the choices cancel out.

It seems pretty horrific to construct an orientation. However, it isn't really that bad. For example, we have:

Lemma. Every vector bundle is F_2-orientable.

Proof. There is only one possible choice of generator.

In the interesting cases, we are usually going to use the following result to construct orientations:

Lemma. If {U_α}_{α∈I} is a trivializing cover such that for each α, β ∈ I, the homeomorphism

(U_α ∩ U_β) × R^d →^{ϕ_α^{−1}} E|_{U_α∩U_β} →^{ϕ_β} (U_α ∩ U_β) × R^d

gives an orientation-preserving map from (U_α ∩ U_β) × R^d to itself, i.e
., has positive determinant on each fiber, then E is R-orientable for any R.

Note that we don't have to check the determinant at every point of U_α ∩ U_β. By continuity, the sign of the determinant is locally constant, so we only have to check it at one point of each connected component.

Proof. Choose a generator u ∈ H^d(R^d, R^d \ {0}; R). Then for x ∈ U_α, we define ε_x by pulling back u along

E_x ↪ E|_{U_α} →^{ϕ_α} U_α × R^d →^{π_2} R^d.   (†_α)

If x ∈ U_β as well, then the analogous linear isomorphism (†_β) differs from (†_α) by post-composition with a linear map L : R^d → R^d of positive determinant. We now use the fact that any linear map of positive determinant is homotopic to the identity. Indeed, both L and id lie in GL_d^+(R), a connected group, and a path between them gives a homotopy between the maps they represent. So (†_α) is homotopic to (†_β), and they induce the same maps on cohomology classes.

Now if we don't know that the maps have positive determinant, then (†_α) and (†_β) might differ by a sign. So in any ring R where 2 = 0, we know every vector bundle is R-orientable. This is a generalization of the previous result for F_2.

10.3 The Thom isomorphism theorem

We now get to the main theorem about vector bundles.

Theorem (Thom isomorphism theorem). Let π : E → X be a d-dimensional vector bundle, and {ε_x}_{x∈X} an R-orientation of E. Then

(i) H^i(E, E^#; R) = 0 for i < d.
(ii) There is a unique class u_E ∈ H^d(E, E^#; R) which restricts to ε_x on each fiber. This is known as the Thom class.
(iii) The map Φ given by the composition

H^i(X; R) →^{π^*} H^i(E; R) →^{— ⌣ u_E} H^{i+d}(E, E^#; R)

is an isomorphism.

Note that (i) follows from
(iii), since H^i(X; R) = 0 for i < 0.

Before we go on and prove this, we talk about why it is useful.

Definition (Euler class). Let π : E → X be a vector bundle. We define the Euler class e(E) ∈ H^d(X; R) as the image of u_E under the composition

H^d(E, E^#; R) → H^d(E; R) →^{s_0^*} H^d(X; R).

This is an example of a characteristic class, which is a cohomology class associated to an oriented vector bundle that behaves nicely under pullback. More precisely, given a vector bundle π : E → X and a map f : Y → X, we can form the pullback square

f^*E →^{f̂} E
f^*π ↓        ↓ π
  Y   →^f    X.

Since we have a fiberwise isomorphism (f^*E)_y ≅ E_{f(y)}, an R-orientation for E induces one for f^*E, and we know f^*(u_E) = u_{f^*E} by uniqueness of the Thom class. So we know

e(f^*E) = f^*e(E) ∈ H^d(Y; R).

Now the Euler class is a cohomology class of X itself, so we can use the Euler class to compare and contrast different vector bundles.

How can we think of the Euler class? It turns out the Euler class gives us an obstruction to the vector bundle having a nowhere-zero section.

Theorem. If there is a section s : X → E which is nowhere zero, then e(E) = 0 ∈ H^d(X; R).

Proof. Notice that any two sections of E → X are homotopic. So we have e(E) = s_0^* u_E = s^* u_E. But since u_E ∈ H^d(E, E^#; R) and s maps into E^#, we have s^* u_E = 0. Perhaps more precisely, we look at the long exact sequence for the pair (E, E^#), giving the diagram

H^d(E, E^#; R) → H^d(E; R) → H^d(E^#; R)
                 s_0^* ↘       ↙ s^*
                     H^d(X; R)

Since s and s_0 are homotopic, the diagram commutes. Also, the top row is exact. So u_E ∈ H^d(E, E^#; R) gets sent along the top row to 0 ∈ H^d(E^#; R), and thus s^* sends it to 0 ∈ H^d(X; R). But the image in H^d(X; R) is exactly the Euler class. So the Euler class vanishes.

Now cohomology is not only a bunch of groups, but also a ring. So we can ask what happens when we cup u_E with itself.

Theorem. We have

u_E ⌣ u_E = Φ(e(E)) = π^*(e(E)) ⌣ u_E ∈ H^*(E, E^#; R).

This is just tracing through the definitions.

Proof. By construction, we know the following diagram commutes:

H^d(E, E^#; R) ⊗ H^d(E, E^#; R) →^⌣ H^{2d}(E, E^#; R)
         q^* ⊗ id ↓                       ∥
H^d(E; R) ⊗ H^d(E, E^#; R)       →^⌣ H^{2d}(E, E^#; R)

We claim that the class u_E ⊗ u_E ∈ H^d(E, E^#; R) ⊗ H^d(E, E^#; R) is sent to π^*(e(E)) ⊗ u_E ∈ H^d(E; R) ⊗ H^d(E, E^#; R). By definition, this means we need q^* u_E = π^*(e(E)), and this is true because π^* is a homotopy inverse to s_0^* and e(E) = s_0^* q^* u_E.

So if we have two elements Φ(c), Φ(d) ∈ H^*(E, E^#; R), then we have

Φ(c) ⌣ Φ(d) = (π^*c ⌣ u_E) ⌣ (π^*d ⌣ u_E) = ±π^*c ⌣ π^*d ⌣ u_E ⌣ u_E = ±π^*(c ⌣ d ⌣ e(E)) ⌣ u_E = ±Φ
(c ⌣ d ⌣ e(E)).

So e(E) is precisely the information necessary to recover the cohomology ring H^*(E, E^#; R) from H^*(X; R).

Lemma. If π : E → X is a d-dimensional R-oriented vector bundle with d odd, then 2e(E) = 0 ∈ H^d(X; R).

Proof. Consider the map α : E → E given by negation on each fiber. This gives an isomorphism

α^* : H^d(E, E^#; R) →^≅ H^d(E, E^#; R).

This acts by negation on the Thom class, i.e. α^*(u_E) = −u_E, as on the fiber E_x, the map α is given by an odd number of reflections, each of which acts on H^d(E_x, E^#_x; R) by −1 (by the analogous result on S^n). So α^* changes each ε_x by a sign, and we lift this to the statement about u_E by the fact that u_E is the unique class that restricts to ε_x for each x.

But we also know α ∘ s_0 = s_0, which implies

s_0^*(α^*(u_E)) = s_0^*(u_E).

Combining this with the result that α^*(u_E) = −u_E, we get that

2e(E) = 2 s_0^*(u_E) = 0.

This is a disappointing result, because it means that if we already know H^d(X; R) has no 2-torsion, then e(E) = 0.

After all that fun, we prove the Thom isomorphism theorem.

Proof of Thom isomorphism theorem. We will drop the "R" in all our diagrams for readability (and also so that they fit in the page).

We first consider the case where the bundle is trivial, so E = X × R^d. Then we note that

H^*(R^d, R^d \ {0}) = { R if * = d; 0 if * ≠ d }.

In particular, the modules are free, and (a relative version of) Künneth's theorem tells us the map

× : H
^*(X) ⊗ H^*(R^d, R^d \ {0}) → H^*(X × R^d, X × (R^d \ {0}))

is an isomorphism. Then the claims of the Thom isomorphism theorem follow immediately.

(i) For i < d, all the summands of H^i(X × R^d, X × (R^d \ {0})) vanish, since the H^*(R^d, R^d \ {0}) factor vanishes in degrees below d.

(ii) The only non-vanishing summand for H^d(X × R^d, X × (R^d \ {0})) is H^0(X) ⊗ H^d(R^d, R^d \ {0}). Then the Thom class must be 1 ⊗ u, where u is the class corresponding to ε_x ∈ H^d(E_x, E^#_x) = H^d(R^d, R^d \ {0}), and this is unique.

(iii) We notice that Φ is just given by

Φ(x) = π^*(x) ⌣ u_E = x × u,

which is an isomorphism by Künneth.

We now patch the result up for a general bundle. Suppose π : E → X is a bundle. Then it has an open cover of trivializations, and moreover if we assume our X is compact, there are finitely many of them. So it suffices to show that if U, V ⊆ X are open sets such that the Thom isomorphism theorem holds for E restricted to U, V and U ∩ V, then it also holds on U ∪ V.

The relative Mayer–Vietoris sequence gives us

H^{d−1}(E|_{U∩V}, E^#|_{U∩V}) →^{∂_{MV}} H^d(E|_{U∪V}, E^#|_{U∪V}) → H^d(E|_U, E^#|_U) ⊕ H^d(E|_V, E^#|_V) → H^d(E|_{U∩V}, E^#|_{U∩V}).

We first construct the Thom class. We have classes

u_{E|_U} ∈ H^d(E|_U, E^#|_U),   u_{E|_V} ∈ H^d(E|_V, E^#|_V).

We claim that (u_{E|_U},
u_{E|_V}) ∈ H^d(E|_U, E^#|_U) ⊕ H^d(E|_V, E^#|_V) gets sent to 0 by the difference i_U^* − i_V^* of the restriction maps to U ∩ V. Indeed, the restrictions of both u_{E|_U} and u_{E|_V} to U ∩ V are Thom classes, so they are equal by uniqueness, and the difference vanishes.

Then by exactness, there must be some u_{E|_{U∪V}} ∈ H^d(E|_{U∪V}, E^#|_{U∪V}) that restricts to u_{E|_U} and u_{E|_V} on U and V respectively. This must be a Thom class, since the property of being a Thom class is checked on each fiber. Moreover, we get uniqueness because H^{d−1}(E|_{U∩V}, E^#|_{U∩V}) = 0, so a class restricting to u_{E|_U} and u_{E|_V} is unique.

The last part of the Thom isomorphism theorem comes from a routine application of the five lemma, and the first part follows from the last as previously mentioned.

10.4 Gysin sequence

Now we do something interesting with vector bundles. We will come up with a long exact sequence associated to a vector bundle, known as the Gysin sequence. We will then use the Gysin sequence to deduce something about the base space.

Suppose we have a d-dimensional vector bundle π : E → X that is R-oriented. We want to talk about the unit sphere in every fiber of E. But to do so, we need a notion of length, and to do that, we want an inner product. Luckily, we do have one, and we know that any two norms on a finite-dimensional vector space are equivalent. So we might as well arbitrarily choose one.

Definition (Sphere bundle). Let π : E → X be a vector bundle, and let ⟨·, ·⟩ : E ⊗ E → R be an inner product. We let

S(E) = {v ∈ E : ⟨v, v⟩ = 1} ⊆ E.

This is the sphere bundle associated to E.

Since the unit sphere is homotopy equivalent to R^d \ {0}, we know the inclusion

j : S(E) ↪ E^#

is a homotopy equivalence, with homotopy inverse given by normalization.

The long exact sequence for the pair (E, E^#) gives (as before, we do not write the R):

H^{i+d}(E, E^#) → H^{i+d}(E) → H^{i+d}(E^#) → H^{i+d+1}(E, E^#)
     ↑ Φ ≅          ↑ π^* ≅        ↓ j^* ≅          ↑ Φ ≅
H^i(X)  →^{⌣ e(E)}  H^{i+d}(X)  →^{p^*}  H^{i+d}(S(E))  →^{p_!}  H^{i+1}(X)

where p : S(E) → X is the projection, and p_! is whatever makes the diagram commute (since j^* and Φ are isomorphisms). The bottom sequence is the Gysin sequence, and it is exact because the top row is exact. This is in fact a long exact sequence of H^*(X; R)-modules, i.e. the maps commute with cup products.

Example. Let L = γ^C_{1,n+1} → CP^n = Gr_1(C^{n+1}) be the tautological 1-dimensional complex vector bundle on Gr_1(C^{n+1}). This is Z-oriented, as any complex vector bundle is, because if we consider the inclusion GL_1(C) → GL_2(R) obtained by pretending C is R^2, we know GL_1(C) is connected, so it lands in the component of the identity, and hence has positive determinant.

The sphere bundle consists of

S(L) = {(V, v) ∈ CP^n × C^{n+1} : v ∈ V, |v| = 1} ≅ {v ∈ C^{n+1} : |v| = 1} ≅ S^{2n+1},

where the middle isomorphism is given by (V, v) ↦ v, with inverse v ↦ (Cv, v).

The Gysin sequence is

H^{i+1}(S^{2n+1}) →^{p_!} H^i(CP^n) →^{⌣ e(L)} H^{i+2}(CP^n) →^{p^*} H^{i+2}(S^{2n+1}).

Now if i ≤ 2n − 2, then both outer terms are 0. So the maps in the middle are isomorphisms. Thus we get
isomorphisms

H^0(CP^n) →^{⌣ e(L)}_{≅} H^2(CP^n) →^{⌣ e(L)}_{≅} H^4(CP^n) → ···,

so H^{2k}(CP^n) = Z · e(L)^k for 0 ≤ k ≤ n. Similarly, we know that the terms in odd degrees vanish. Checking what happens at the end points carefully, the conclusion is that

H^*(CP^n) = Z[e(L)]/(e(L)^{n+1})

as a ring.

Example. We do the real case of the above computation. We have K = γ^R_{1,n+1} → RP^n = Gr_1(R^{n+1}). The previous trick doesn't work, and indeed K isn't Z-orientable. However, it is F_2-oriented, as every vector bundle is, and by exactly the same argument, we know S(K) ≅ S^n. So by the same argument as above, we find that

H^*(RP^n; F_2) ≅ F_2[e(K)]/(e(K)^{n+1}).

Note that this is different from the complex case before, because here deg e(K) = 1, while the complex case had degree 2.

11 Manifolds and Poincaré duality

We are going to prove Poincaré duality, and then use it to prove a lot of things about manifolds. Poincaré duality tells us that for a compact oriented manifold M of dimension n, we have

H_d(M) ≅ H^{n−d}(M).

To prove this, we will want to induct over covers of M. However, for a compact manifold, the open sets in a cover are in general not compact. We get compactness only when we join all of them up. So we need to come up with a version of Poincaré duality that works for non-compact manifolds, which is less pretty and requires the notion of compactly supported cohomology.

11.1 Compactly supported cohomology

Definition (Support of cochain). Let ϕ ∈ C^n(X) be a cochain. We say ϕ has support in S ⊆ X if whenever σ : Δ^n → X \ S ⊆ X, then ϕ(σ) = 0.
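One can check directly that the coboundary of a supported cochain is again supported, since every face of a simplex missing S also misses S. Here is a toy sketch in a simplicial model, where a "simplex" is just a tuple of vertex labels and "missing S" means no vertex lies in S; the functions phi and d_phi are ours, for illustration only.

```python
from itertools import combinations

S = {0}             # phi will have support in S
others = [1, 2, 3]  # vertices away from S

def phi(simplex):
    # a toy 1-cochain: nonzero only on simplices touching S
    return 1 if any(v in S for v in simplex) else 0

def d_phi(simplex):
    # simplicial coboundary: alternating sum of phi over the faces
    return sum((-1) ** i * phi(simplex[:i] + simplex[i + 1:])
               for i in range(len(simplex)))

# phi has support in S: it vanishes on every 1-simplex avoiding S...
assert all(phi(e) == 0 for e in combinations(others, 2))
# ...and then d(phi) vanishes on every 2-simplex avoiding S as well
assert all(d_phi(t) == 0 for t in combinations(others, 3))
```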
In this case, dϕ also has support in S.

Note that this has a slight subtlety. The definition only requires that if σ lies completely outside S, then ϕ(σ) vanishes. However, we can have simplices that extend very far and only touch S slightly, and the support tells us nothing about the value of ϕ on such simplices. Later, we will get around this problem by doing sufficient barycentric subdivision.

Definition (Compactly-supported cochain). Let C^•_c(X) ⊆ C^•(X) be the sub-chain complex consisting of those ϕ which have support in some compact K ⊆ X.

Note that this makes sense — we have seen that if ϕ has support in K, then dϕ has support in K. To see it is indeed a sub-chain complex, we need to show that C^•_c(X) is a subgroup! Fortunately, if ϕ has support in K, and ψ has support in L, then ϕ + ψ has support in K ∪ L, which is compact.

Definition (Compactly-supported cohomology). The compactly supported cohomology of X is

H^*_c(X) = H^*(C^•_c(X)).

Note that we can write

C^•_c(X) = ∪_{K compact} C^•(X, X \ K) ⊆ C^•(X).

We would like to say that the compactly supported cohomology is also "built out of" those relative cohomologies, but we cannot just take the union, because the relative cohomology is not a subgroup of H^*(X). To do that, we need something more fancy.

Definition (Directed set). A directed set is a partial order (I, ≤) such that for all i, j ∈ I, there is some k ∈ I such that i ≤ k and j ≤ k.

Example. Any total order is a directed set.

Example. N with divisibility | as the partial order is a directed set.

Definition (Direct limit). Let I be a directed set. A direct system of abelian groups indexed by I is a collection of abelian groups G_i for each i ∈ I and homomorphisms

ρ_{ij} : G_i → G_j

for all i, j ∈ I with i ≤ j, such that

ρ_{ii} = id_{G_i}  and  ρ_{ik} = ρ_{jk} ∘ ρ_{ij}

whenever i ≤ j ≤ k. We define the direct limit of the system (G_i, ρ_{ij}) to be

lim_{→ i∈I} G_i = (⊕_{i∈I} G_i) / ⟨x − ρ_{ij}(x) : x ∈ G_i⟩.

The underlying set of it is

(∐_{i∈I} G_i) / {x ∼ ρ_{ij}(x) : x ∈ G_i}.

In terms of the second description, the group operation is given as follows: given x ∈ G_i and y ∈ G_j, we find some k such that i, j ≤ k. Then we can view x, y as elements of G_k and do the operation there. It is an exercise to show that these two descriptions are indeed the same.

Now observe that if J ⊆ I is a sub-directed set such that for all a ∈ I, there is some b ∈ J with a ≤ b, then we have

lim_{→ i∈J} G_i ≅ lim_{→ i∈I} G_i.

So our claim is now:

Theorem. For any space X, we let

K(X) = {K ⊆ X : K is compact}.

This is a directed set under inclusion, and the map K ↦ H^n(X, X \ K) gives a direct system of abelian groups indexed by K(X), where the maps ρ are given by restriction. Then we have

H^n_c(X) ≅ lim_{→ K(X)} H^n(X, X \ K).

Proof. We have

C^n_c(X) ≅ lim_{→ K(X)} C^n(X, X \ K),

where we have a map

lim_{→ K(X)} C^n(X, X \ K) → C^n_c(X)

given in each component of the direct limit by inclusion, and it is easy to see that this is well-defined and bijective. It is then a general algebraic fact that taking cohomology commutes with direct limits, and we will not prove
|
it. Lemma. We have H i c(Rd; R) ∼= R i = d 0 otherwise. Proof. Let B ∈ K(Rd) be the balls, namely B = {nDd, n = 0, 1, 2, · · · }. Then since every compact set is contained in one of them, we have H n c (X) ∼= lim −→ K∈K(Rd) H n(Rd, Rd \ K; R) ∼= lim −→ nDd∈B H n(Rd, Rd \ nDd; R) We can compute that directly. Since Rd is contractible, the connecting map H i(Rd, Rd \ nDd; R) → H i−1(Rd \ nDd; R) in the long exact sequence is an isomorphism. Moreover, the following diagram commutes: H i(Rd, Rd \ nDn; R) ρn,n+1 H i(Rd, Rd \ (n + 1)Dd; R) ∂ ∂ H i−1(Rd \ nDd; R) H i−1(Rd \ (n + 1)Dd; R) But all maps here are isomorphisms because the horizontal maps are homotopy equivalences. So we know H i(Rd, Rd \ nDd; R) ∼= H i(Rd, Rd \ {0}; R) ∼= H i−1(Rd \ {0}; R). lim −→ So it follows that H i(Rd, Rd \ {0}; R) = R i = d 0 otherwise. In general, this is how we always compute compactly-supported cohomology — we pick a suitable subset of K(X) and compute the limit of that instead. Note that compactly-supported cohomology is not homotopy-invariant! It knows about the dimension of Rd, since the notion of compactness is not homotopy invariant. Even worse, in general, a map f : X → Y does not induce a map f ∗ : H ∗ c (X). Indeed, the usual map does not work because the preimage of a compact set of a compact set is not necessarily compact. c (Y ) → H ∗ 70 11 Manifolds and Poincar´
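The direct-limit bookkeeping above can be made concrete for a toy directed system. A minimal sketch (the names `rho` and `equal_in_limit` are ours; we take the index set to be N with its usual order, where `max` supplies the required common upper bound):

```python
# Direct limit of the system  Z --x2--> Z --x2--> Z --> ...,
# indexed by the directed set (N, <=).  An element of the limit is
# represented by a pair (i, x) with x in G_i = Z, and (i, x) is
# identified with (j, rho(i, j, x)) whenever i <= j.

def rho(i, j, x):
    """Structure map rho_ij : G_i -> G_j, here multiplication by 2^(j-i)."""
    assert i <= j
    return x * 2 ** (j - i)

def equal_in_limit(a, b):
    """(i, x) and (j, y) agree in lim G_i iff they agree after being
    pushed into a common G_k; for (N, <=), k = max(i, j) suffices."""
    (i, x), (j, y) = a, b
    k = max(i, j)
    return rho(i, k, x) == rho(j, k, y)

# (0, 1) and (3, 8) are the same element of the limit, just as a class
# in H^n_c(X) can be represented in H^n(X | K) for any sufficiently
# large compact K.
assert equal_in_limit((0, 1), (3, 8))
assert not equal_in_limit((0, 1), (1, 3))
```

For a general directed set one replaces `max(i, j)` by any common upper bound, which exists by the definition of a directed set.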
|
Definition (Proper map). A map f : X → Y of spaces is proper if the preimage of a compact set is compact.

Now if f is proper, then it does induce a map f^* : H^*_c(Y) → H^*_c(X) by the usual construction.

From now on, we will assume all spaces are Hausdorff, so that all compact subsets are closed. This isn't too bad a restriction, since we are ultimately interested in manifolds, which are by definition Hausdorff.

Let i : U → X be the inclusion of an open subspace. We let K ⊆ U be compact. Then by excision, we have an isomorphism

H^*(U, U \ K) ≅ H^*(X, X \ K),

since the thing we cut out, namely X \ U, is already closed, and U \ K is open, since K is closed. So a compactly supported cohomology class on U gives one on X, and we get a map i_* : H^*_c(U) → H^*_c(X). We call this “extension by zero”. Indeed, this is how the cohomology class works — if we have a cohomology class ϕ on U supported on K ⊆ U, then given any simplex in X, if it lies inside U, we know how to evaluate it, and if it misses K, we just send it to zero. By barycentric subdivision, we can assume every simplex is either inside U or misses K, so we are done.

Example. If i : U → R^d is an open ball, then the map i_* : H^*_c(U) → H^*_c(R^d) is an isomorphism. So each cohomology class is equivalent to something with support as small as we like.

Since it is annoying to write H^n(X, X \ K) all the time, we write

H^n(X | K; R) = H^n(X, X \ K; R).

By excision, this depends only on a neighbourhood of K in X. In the case where K is a point, this is the local cohomology at that point. So it makes sense to call this the local cohomology near K ⊆ X.

Our end goal is to produce a Mayer-Vietoris sequence for compactly
-supported coho- mology. But before we do that, we first do it for local cohomology. Proposition. Let K, L ⊆ X be compact. Then there is a long exact sequence H n(X | K ∩ L) H n(X | K) ⊕ H n(X | L) H n(X | K ∪ L) H n+1(X | K ∩ L) ∂ H n+1(X | K) ⊕ H n+1(X | L) · · ·, where the unlabelled maps are those induced by inclusion. We are going to prove this by relating it to a Mayer-Vietoris sequence of some sort. 71 11 Manifolds and Poincar´e duality III Algebraic Topology Proof. We cover X \ K ∩ L by U = {X \ K, X \ L}. We then draw a huge diagram (here ∗ denotes the dual, i.e. X ∗ = Hom(X; R), and C·(X | K) = C·(X, X \ K)): 0 0 0 0 0 0 ∗ C·(X) CU· (X\K∩L) C·(X | K) ⊕ C·(X | L) C·(X | K ∪ L) C·(X) (id,− id) C·(X) ⊕ C·(X) id + id C·(X) C· (j∗ U (X \ K ∩ L) C·(X \ K) ⊕ C·(X \ L) 1,−j∗ 2 ) 1 +i∗ i∗ 2 C·(X \ K ∪ L) 0 0 0 0 0 0 This is a diagram. Certainly. The bottom two rows and all columns are exact. By a diagram chase (the nine lemma), we know the top row is exact. Taking the long exact sequence almost gives what we want, except the first term is a funny thing. We now analyze that object. We look at the left vertical column: 0 Hom C·(x) CU· (X\K∩L), R C·(X) Hom(C U· (X \ K ∩ L), R) 0 Now by the small simplices
theorem, the right hand object gives the same (co)homology as C·(X \ K ∩ L; R). So we can produce another short exact sequence: 0 0 Hom C·(x) CU· (X\(K∩L)), R C·(X) Hom(C U· (X \ K ∩ L), R) C·(X, X \ K ∩ L) C·(X) Hom(C·(X \ K ∩ L), R) 0 0 Now the two right vertical arrows induce isomorphisms when we pass on to homology. So by taking the long exact sequence, the five lemma tells us the left hand map is an isomorphism on homology. So we know H∗ Hom C·(x) C U· (X \ (K ∩ L)), R ∼= H ∗(X | K ∩ L). So the long exact of the top row gives what we want. 72 11 Manifolds and Poincar´e duality III Algebraic Topology Corollary. Let X be a manifold, and X = A ∪ B, where A, B are open sets. Then there is a long exact sequence H n c (A ∩ B) H n c (A) ⊕ H n c (B) H n c (X) H n+1 c (A ∩ B) H n+1 c (A) ⊕ H n+1 c (B) · · · ∂ Note that the arrows go in funny directions, which is different from both homology and cohomology versions! Proof. Let K ⊆ A and L ⊆ B be compact sets. Then by excision, we have isomorphisms H n(X | K) ∼= H n(A | K) H n(X | L) ∼= H n(B | L) H n(X | K ∩ L) ∼= H n(A ∩ B | K ∩ L). So the long exact sequence from the previous proposition gives us H n(A ∩ B | K ∩ L) H n(A | K) ⊕ H n(B | L) H n(X | K ∪ L) H n+1(A ∩ B | K ∩ L) H n+1(A |
K) ⊕ H n+1(B | L) · · · ∂ The next step is to take the direct limit over K ∈ K(A) and L ∈ K(B). We need to make sure that these do indeed give the right compactly supported cohomology. The terms H n(A | K) ⊕ H n(B | L) are exactly right, and the one for H n(A ∩ B | K ∩ L) also works because every compact set in A ∩ B is a compact set in A intersect a compact set in B (take those to be both the original compact set). So we get a long exact sequence H n c (A ∩ B) H n c (A) ⊕ H n c (B) lim −→ K∈K(A) L∈K(B) H n(X | K ∪ L) H n+1 ∂ c (A ∩ B) To show that that funny direct limit is really what we want, we have to show that every compact set C ∈ X lies inside some K ∪ L, where K ⊆ A and L ⊆ B are compact. Indeed, as X is a manifold, and C is compact, we can find a finite set of closed balls in X, each in A or in B, such that their interiors cover C. So done. (In general, this will work for any locally compact space) This is all we have got to say about compactly supported cohomology. We now start to talk about manifolds. 11.2 Orientation of manifolds Similar to the case of vector bundles, we will only work with manifolds with orientation. The definition of orientation of a manifold is somewhat similar 73 11 Manifolds and Poincar´e duality III Algebraic Topology to the definition of orientation of a vector bundle. Indeed, it is true that an orientation of a manifold is just an orientation of the tangent bundle, but we will not go down that route, because the description we use for orientation here will be useful later on. After defining orientation, we will prove a result similar to (the first two parts of) the Thom isomorphism theorem. For a d-manifold M and x ∈ M, we
know that

H_i(M | x; R) ≅ { R  if i = d;  0  if i ≠ d }.

We can then define a local orientation of a manifold to be a generator of this group.

Definition (Local R-orientation of manifold). For a d-manifold M, a local R-orientation of M at x is an R-module generator µ_x ∈ H_d(M | x; R).

Definition (R-orientation). An R-orientation of M is a collection {µ_x}_{x∈M} of local R-orientations such that if

ϕ : R^d → U ⊆ M

is a chart of M, and p, q ∈ R^d, then the composition of isomorphisms

H_d(M | ϕ(p)) ≅ H_d(U | ϕ(p)) ≅ H_d(R^d | p)
                                   ↓ translation
H_d(M | ϕ(q)) ≅ H_d(U | ϕ(q)) ≅ H_d(R^d | q)

sends µ_{ϕ(p)} to µ_{ϕ(q)}, where the horizontal isomorphisms are excision and ϕ_*, and the vertical isomorphism is induced by a translation of R^d.

Definition (Orientation-preserving homeomorphism). For a homeomorphism f : U → V with U, V ⊆ R^d open, we say f is R-orientation-preserving if for each x ∈ U, and y = f(x), the composition

H_d(R^d | 0; R) --translation--> H_d(R^d | x; R) --excision--> H_d(U | x; R)
  --f_*--> H_d(V | y; R) --excision--> H_d(R^d | y; R) --translation--> H_d(R^d | 0; R)

is the identity on H_d(R^d | 0; R).

As before, we have the following lemma:

Lemma.
(i) If R = F_2, then every manifold is R-orientable.
(ii) If {ϕ_α : R^d → U_α ⊆ M} is an open cover of M by Euclidean spaces such that each homeomorphism

R^d ⊇ ϕ_α^{−1}(U_α ∩ U_β) --ϕ_β^{−1} ∘ ϕ_α--> ϕ_β^{−1}(U_α ∩ U_β) ⊆ R^d

is orientation-preserving, then M is R-orientable.

74 11 Manifolds and Poincaré duality III Algebraic Topology

Proof.
(i) F_2 has a unique F_2-module generator.
(ii) For x ∈ U_α, we define µ_x to be the image of the standard orientation of R^d under

H_d(R^d | 0) --trans.--> H_d(R^d | ϕ_α^{−1}(x)) --(ϕ_α)_*--> H_d(U_α | x) ≅ H_d(M | x).

If this is well-defined, then it is obvious that the collection is compatible. However, we have to check it is well-defined, because to define this we had to pick a chart. If x ∈ U_β as well, we need to compare with the local orientation µ′_x defined using U_β. But they have to agree by the definition of orientation-preserving.

Finally, we get to the theorem:

Theorem. Let M be an R-oriented manifold and A ⊆ M be compact. Then
(i) There is a unique class µ_A ∈ H_d(M | A; R) which restricts to µ_x ∈ H_d(M | x; R) for all x ∈ A.
(ii) H_i(M | A; R) = 0 for i > d.

Proof. Call a compact set A “good” if it satisfies the conclusion of the theorem.

Claim. We first show that if K, L and K ∩ L are good, then K ∪ L is good.

This is analogous to the proof of the Thom isomorphism theorem, and we will omit it.

Now our strategy is to prove the following in order:
(i) If A ⊆ R^d is convex, then A is good.
(ii) If A ⊆ R^d, then A is good.
(iii) If A ⊆ M, then A is good.

Claim. If A ⊆ R^d is convex, then A is good.

Let x ∈ A. Then we have an inclusion R^d \ A → R^d \ {x}. This is in fact a homotopy equivalence
by scaling away from x. Thus the map

H_i(R^d | A) → H_i(R^d | x)

is an isomorphism for all i, by the five lemma. Then in degree d, there is some µ_A corresponding to µ_x. This µ_A then has the required property by the definition of orientability. The second part of the theorem also follows from what we know about H_i(R^d | x).

Claim. If A ⊆ R^d, then A is good.

75 11 Manifolds and Poincaré duality III Algebraic Topology

For A ⊆ R^d compact, we can find a finite collection of closed balls B_i such that

A ⊆ ⋃_{i=1}^n B̊_i.

Moreover, if U ⊇ A for any open U, then we can in fact take B_i ⊆ U. By induction on the number of balls n, the first claim tells us that any B = ⋃_{i=1}^n B_i of this form is good.

We now let

G = {B ⊆ R^d : A ⊆ B̊, B compact and good}.

We claim that this is a directed set under reverse inclusion. To see this, for B, B′ ∈ G, we need to find a B′′ ∈ G such that B′′ ⊆ B and B′′ ⊆ B′. But B̊ ∩ B̊′ is an open set containing A, so the above argument gives us a good compact B′′ with A ⊆ B̊′′ and B′′ ⊆ B̊ ∩ B̊′. So we are safe.

Now consider the directed system of groups given by

B ↦ H_i(R^d | B),

and there is an induced map

lim→_{B∈G} H_i(R^d | B) → H_i(R^d | A),

since each H_i(R^d | B) maps to H_i(R^d | A) by inclusion, and these maps are compatible. We claim that this is an isomorphism. We first show that it is surjective. Let [c] ∈ H_i(R^d | A). Then the boundary of c ∈ C_i(R^d) is a finite sum of simplices in R^d \ A. So it is a sum of simplices in some compact C ⊆ R^d \ A. But then A ⊆ R^d \ C, and R^d \ C is an open neighbourhood of A. So
we can find a good B such that A ⊆ B̊ and B ⊆ R^d \ C. Then c ∈ C_i(R^d | B) is a cycle. So we know [c] ∈ H_i(R^d | B). So the map is surjective. Injectivity is obvious.

An immediate consequence of this is that for i > d, we have H_i(R^d | A) = 0. Also, if i = d, we know that µ_A is given uniquely by the collection {µ_B}_{B∈G} (uniqueness follows from injectivity).

Claim. If A ⊆ M, then A is good.

This follows from the fact that any compact A ⊆ M can be written as a finite union of compact sets A_α with A_α ⊆ U_α ≅ R^d. So the A_α and their intersections are good. So done.

Corollary. If M is compact, then we get a unique class [M] = µ_M ∈ H_d(M; R) such that it restricts to µ_x for each x ∈ M. Moreover, H_i(M; R) = 0 for i > d.

This is not too surprising, actually. If we have a triangulation of the manifold, then this [M] is just the sum of all the top-dimensional triangles.

Definition (Fundamental class). The fundamental class of an R-oriented manifold is the unique class [M] that restricts to µ_x for each x ∈ M.

11.3 Poincaré duality

We now get to the main theorem — Poincaré duality:

Theorem (Poincaré duality). Let M be a d-dimensional R-oriented manifold. Then there is a map

D_M : H^k_c(M; R) → H_{d−k}(M; R)

that is an isomorphism.

The majority of the work is in defining the map. Afterwards, proving it is an isomorphism is a routine exercise with Mayer-Vietoris and the five lemma.

What does this tell us? We know that M has no homology or cohomology in negative dimensions. So by Poincaré duality, there is also no homology or cohomology in dimensions
> d. Moreover, if M itself is compact, then we know H^0_c(M; R) has a special element 1. So we also get a canonical element of H_d(M; R). But we know there is a special element of H_d(M; R), namely the fundamental class. They are in fact the same, and this is no coincidence. This is in fact how we are going to produce the map.

To define the map D_M, we need the notion of the cap product.

Definition (Cap product). The cap product is defined by

∩ : C_k(X; R) × C^ℓ(X; R) → C_{k−ℓ}(X; R)
(σ, ϕ) ↦ σ ∩ ϕ = ϕ(σ|_{[v_0,…,v_ℓ]}) σ|_{[v_ℓ,…,v_k]}.

We want this to induce a map on homology. To do so, we need to know how it interacts with differentials.

Lemma. We have

d(σ ∩ ϕ) = (−1)^d((dσ) ∩ ϕ − σ ∩ (dϕ)).

Proof. Write both sides out.

As with the analogous formula for the cup product, this implies we get a well-defined map on homology and cohomology, i.e. we obtain a map

∩ : H_k(X; R) × H^ℓ(X; R) → H_{k−ℓ}(X; R).

As with the cup product, there are also relative versions

∩ : H_k(X, A; R) × H^ℓ(X; R) → H_{k−ℓ}(X, A; R)

and

∩ : H_k(X, A; R) × H^ℓ(X, A; R) → H_{k−ℓ}(X; R).

We would like to say that the cap product is natural, but since maps of spaces induce maps of homology and cohomology in opposite directions, this is rather tricky. What we manage to get is the following:

Lemma. If f : X → Y is a map, and x ∈ H_k(X; R) and y ∈ H^ℓ(Y; R), then we have

f_*(x) ∩ y = f_*(x ∩ f^*(y)) ∈ H_{k−ℓ}(Y; R).

In other words
, the following diagram commutes:

H_k(Y; R) × H^ℓ(Y; R) ----∩----> H_{k−ℓ}(Y; R)
  ↑ f_* × id                          ↑ f_*
H_k(X; R) × H^ℓ(Y; R)
  ↓ id × f^*
H_k(X; R) × H^ℓ(X; R) ----∩----> H_{k−ℓ}(X; R)

Proof. We check this on the chain level. We let σ : Δ^k → X. Then we have

f_#(σ ∩ f^# y) = f_#((f^# y)(σ|_{[v_0,…,v_ℓ]}) σ|_{[v_ℓ,…,v_k]})
             = y(f_#(σ|_{[v_0,…,v_ℓ]})) f_#(σ|_{[v_ℓ,…,v_k]})
             = y((f_# σ)|_{[v_0,…,v_ℓ]}) (f_# σ)|_{[v_ℓ,…,v_k]}
             = (f_# σ) ∩ y.

So done.

Now if M is compact, then we simply define the duality map as

D_M = [M] ∩ − : H^ℓ(M; R) → H_{d−ℓ}(M; R).

If not, we note that H^ℓ_c(M; R) is the direct limit of H^ℓ(M | K; R) over compact K, and for each compact K we have the class µ_K ∈ H_d(M | K; R). So we can define the required map for each K, and then put them together.

More precisely, if K ⊆ L ⊆ M are such that K, L are compact, then we have an inclusion map

(id, inc) : (M, M \ L) → (M, M \ K).

Then we have an induced map

(id, inc)_* : H_d(M | L; R) → H_d(M | K; R)

that sends the fundamental class µ_L to µ_K, by uniqueness of the fundamental class. Then the relative version of our lemma tells us that the following diagram commutes:

H^ℓ(M | K; R) --(id,inc)^*--> H^ℓ(M | L; R)
  µ_K ∩ − ↓                       ↓ µ_L ∩ −
H_{d−ℓ}(M; R) ------id------> H_{d−ℓ}(M; R)

Indeed, this is just saying that

(id)_*(µ_L ∩ (id, inc)^*(ϕ)) = µ_K ∩ ϕ.
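The chain-level identity f_#(σ ∩ f^# y) = (f_# σ) ∩ y in the proof above can be checked mechanically on a toy model. A minimal sketch (the representation of simplices as vertex tuples and the names `cap`, `push`, `pull` are our own conventions, not anything from the notes):

```python
# Chain-level check of the naturality formula
#     f_#(sigma ∩ f^# y) = (f_# sigma) ∩ y.
# A k-simplex is modelled as a (k+1)-tuple of points; an l-cochain is a
# function on (l+1)-tuples.  The cap product keeps the back face,
# weighted by the cochain evaluated on the front face.

def cap(sigma, phi, l):
    """sigma ∩ phi = phi(sigma|[v0..vl]) * sigma|[vl..vk], as a
    (coefficient, back-face) pair."""
    front, back = sigma[:l + 1], sigma[l:]
    return (phi(front), back)

def push(f, sigma):
    """f_# on simplices: postcompose with f pointwise."""
    return tuple(f(v) for v in sigma)

def pull(f, psi):
    """f^# on cochains: precompose with f_#."""
    return lambda tau: psi(push(f, tau))

f = lambda v: v % 3              # some map of "spaces"
y = lambda tau: sum(tau)         # an l-cochain on the target, with l = 1
sigma = (0, 1, 2, 4)             # a 3-simplex in the source

coef, back = cap(sigma, pull(f, y), 1)
lhs = (coef, push(f, back))      # f_#(sigma ∩ f^# y)
rhs = cap(push(f, sigma), y, 1)  # (f_# sigma) ∩ y
assert lhs == rhs
```

Each of the four equalities in the proof corresponds to unfolding one of these definitions.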
So we get a duality map

D_M = lim→ (µ_K ∩ −) : lim→ H^ℓ(M | K; R) → H_{d−ℓ}(M; R),

i.e. D_M : H^ℓ_c(M; R) → H_{d−ℓ}(M; R). Now we can prove Poincaré duality.

Proof. We say M is “good” if the Poincaré duality theorem holds for M. We now do the most important step in the proof:

Claim 0. R^d is good.

The only non-trivial degrees to check are ℓ = 0, d, and ℓ = 0 is straightforward. For ℓ = d, we have shown that the maps

H^d_c(R^d; R) ≅ H^d(R^d | 0; R) ≅ Hom_R(H_d(R^d | 0; R), R)

are isomorphisms, where the last map is given by the universal coefficients theorem. Under these isomorphisms, the map

H^d_c(R^d; R) --D_{R^d}--> H_0(R^d; R) --ε--> R

corresponds to the map Hom_R(H_d(R^d | 0; R), R) → R given by evaluating a function at the fundamental class µ_0. But as µ_0 ∈ H_d(R^d | 0; R) is an R-module generator, this map is an isomorphism.

Claim 1. If M = U ∪ V and U, V, U ∩ V are good, then M is good.

Again, this is an application of the five lemma with the Mayer-Vietoris sequence. We have

H^ℓ_c(U ∩ V) → H^ℓ_c(U) ⊕ H^ℓ_c(V) → H^ℓ_c(M) → H^{ℓ+1}_c(U ∩ V)
  ↓ D_{U∩V}     ↓ D_U ⊕ D_V        ↓ D_M       ↓ D_{U∩V}
H_{d−ℓ}(U ∩ V) → H_{d−ℓ}(U) ⊕ H_{d−ℓ}(V) → H_{d−ℓ}(M) → H_{d−ℓ−1}(U ∩ V)

We are done by the five lemma if this commutes. But unfortunately, it doesn't. It only commutes up to a sign, but that is sufficient for the five lemma to apply, if we trace through the proof of the five lemma.

Claim 2. If U_1 ⊆ U_2 ⊆ · · · with M = ⋃_n U_n, and the U_i are all good, then M is good.

Any compact set in M lies in some U_n, so the map

lim→ H^ℓ_c(U_n) → H^ℓ_c(M)

is an isomorphism. Similarly, since simplices are compact, we also have

H_{d−ℓ}(M) = lim→ H_{d−ℓ}(U_n).

Since the direct limit of isomorphisms is an isomorphism, we are done.

Claim 3. Any open subset of R^d is good.

Any U is a countable union of open balls (something something rational points something something). For finite unions, we can use Claims 0 and 1 and induction. For countable unions, we use Claim 2.

Claim 4. If M has a countable cover by R^d's, it is good.

Same argument as above, where we instead use Claim 3 instead of Claim 0 for the base case.

Claim 5. Any manifold M is good.

Any manifold is second-countable by definition, so has a countable open cover by copies of R^d.

Corollary. For any compact d-dimensional R-oriented manifold M, the map

[M] ∩ − : H^ℓ(M; R) → H_{d−ℓ}(M; R)

is an isomorphism.

Corollary. Let M be an odd-dimensional compact manifold. Then the Euler characteristic χ(M) = 0.

Proof. Pick R = F_2. Then M is F_2-oriented. Since we can compute Euler characteristics using coefficients in F_2, writing dim M = 2n + 1, we have

χ(M) = Σ_{i=0}^{2n+1} (−1)^i dim_{F_2} H_i(M; F_2).

But we know

H_i(M; F_2) ≅ H^{2n+1−i}(M; F_2) ≅ (H_{2n+1−i}(M; F_2))^* ≅ H_{2n+1−i}(M; F_2)

by Poincaré duality and the universal coefficients theorem. But the dimensions of these show up in the sum above with opposite signs. So they cancel, and χ(M) = 0.

What is the relation between the cap product and the cup product? If σ ∈ C_{k+ℓ}(X; R),
ϕ ∈ C k(X; R) and ψ ∈ C (X; R), then ψ(σ ϕ) = ψ(ϕ(σ|[v0,...,vk])σ|[vk,...,vk+]) = ϕ(σ|[v0,...,vk])ψ(σ|[vk,...,vk+]) = (ϕ ψ)(σ), Since we like diagrams, we can express this equality in terms of some diagram commuting. The map h : H k(X; R) → HomR(Hk(X; R), R) in the universal coefficients theorem is given by [ϕ] → ([σ] → ϕ(σ)). This map exists all the time, even if the hypothesis of the universal coefficients theorem does not hold. It’s just that it need not be an isomorphism. The formula ψ(σ ϕ) = (ϕ ψ)(σ) 80 11 Manifolds and Poincar´e duality III Algebraic Topology then translates to the following diagram commuting: H (X; R) ϕ · H k+(X; R) h h HomR(H(X; R), R) ( · ϕ)∗ HomR(H+k(X; R), R) Now when the universal coefficient theorem applies, then the maps h are isomorphisms. So the cup product and cap product determine each other, and contain the same information. Now since Poincar´e duality is expressed in terms of cap products, this correspondence gives us some information about cupping as well. Theorem. Let M be a d-dimensional compact R-oriented manifold, and consider the following pairing: ·, · : H k(M ; R) ⊗ H d−k(M, R) R. [ϕ] ⊗ [ψ] (ϕ ψ)[M ] If H∗(M ; R) is free, then ·, · is non-singular, i.e. both adjoints are isomorphisms, i.e. both H k(M ; R) Hom(H d−k(M ; R), R) [ϕ] ([ψ] → ϕ, ψ) and the other way round
are isomorphisms. Proof. We have as we know ϕ, ψ = (−1)|ϕ||ψ|ψ, ϕ, ϕ ψ = (−1)|ϕ||ψ|ψ ϕ. So if one adjoint is an isomorphism, then so is the other. To see that they are isomorphsims, we notice that we have an isomorphism H k(M ; R) UCT HomR(Hk(M ; R), R) D∗ m HomR(H d−k(M ; R), R). [ϕ] ([σ] → ϕ(σ)) ([ψ] → ϕ([M ] ψ)) But we know ϕ([M ] ψ) = (ψ ϕ)([M ]) = ψ, ϕ. So this is just the adjoint. So the adjoint is an isomorphism. This is a very useful result. We have already seen examples where we can figure out if a cup product vanishes. But this result tells us that certain cup products are not zero. This is something we haven’t seen before. 81 11 Manifolds and Poincar´e duality III Algebraic Topology Example. Consider CPn. We have previously computed H∗(CPn, Z) = Z ∗ = 2i, 0 otherwise 0 ≤ i ≤ n Also, we know CPn is Z-oriented, because it is a complex manifold. Let’s forget that we have computed the cohomology ring structure, and compute it again. Suppose for induction, that we have found H ∗(CPn−1, Z) = Z[x]/(xn). As CPn is obtained from CPn−1 by attaching a 2n cell, the map given by inclusion i∗ : H 2(CPn, Z) → H 2(CPn−1, Z) is an isomorphism. Then the generator x ∈ H 2(CPn−1, Z) gives us a generator y ∈ H 2(CPn, Z). Now if we can show that yk ∈ H 2k(CPn, Z) ∼= Z is a generator for all k, then H ∗(CPn, Z) ∼= Z[y]/(yn+1). But we know that yn−
1 generates H^{2n−2}(CP^n; Z), since it pulls back under i^* to x^{n−1}, which is a generator. Finally, consider the pairing

H^2(CP^n; Z) ⊗ H^{2n−2}(CP^n; Z) → Z,
y ⊗ y^{n−1} ↦ (y ∪ y^{n−1})[CP^n] = y^n[CP^n].

Since this is non-singular, we know y^n ∈ H^{2n}(CP^n; Z) must be a generator. Of course, we can get H^*(RP^n; F_2) similarly.

11.4 Applications

We go through two rather straightforward applications, before we move on to bigger things like the intersection product.

Signature

We focus on the case where d = 2n is even. Then we have, in particular, a non-degenerate bilinear form

⟨·, ·⟩ : H^n(M; R) ⊗ H^n(M; R) → R.

Also, we know

⟨a, b⟩ = (−1)^{n²}⟨b, a⟩ = (−1)^n⟨b, a⟩.

So this is a symmetric form if n is even, and a skew-symmetric form if n is odd. These are very different scenarios. For example, we know a symmetric matrix is diagonalizable with real eigenvalues (if R = R), but a skew-symmetric form does not have these properties.

So if M is 4k-dimensional and Z-oriented, then in particular M is R-oriented. Then the map

⟨·, ·⟩ : H^{2k}(M; R) ⊗ H^{2k}(M; R) → R

can be represented by a symmetric real matrix, which can be diagonalized. Its real eigenvalues can be positive or negative.

Definition (Signature of manifold). Let M be a 4k-dimensional Z-oriented manifold. Then the signature is the number of positive eigenvalues of

⟨·, ·⟩ : H^{2k}(M; R) ⊗ H^{2k}(M; R) → R

minus the number of negative eigenvalues. We write this as sgn(M).

By Sylvester's law of inertia, this is well-defined.

Fact. If M = ∂W for some compact (4k + 1)-dimensional manifold W with boundary, then sgn(M) = 0.

Example. CP² has H^2(CP²; R) ≅ R, and the bilinear form is represented by the matrix (1). So the signature is 1. So CP² is not the boundary of a manifold.

Degree

Recall we defined the degree of a map from a sphere to itself. But if we have a Z-oriented space, we have the fundamental class [M], and then there is an obvious way to define the degree.

Definition (Degree of map). If M, N are d-dimensional compact connected Z-oriented manifolds, and f : M → N, then f_*([M]) ∈ H_d(N; Z) ≅ Z · [N]. So

f_*([M]) = k[N]

for some k. This k is called the degree of f, written deg(f).

If N = M = S^n and we pick the same orientation for them, then this recovers our previous definition. By exactly the same proof, we can compute this degree using local degrees, just as in the case of a sphere.

Corollary. Let f : M → N be a map between manifolds. If F is a field and deg(f) ≠ 0 ∈ F, then the induced map

f^* : H^*(N; F) → H^*(M; F)

is injective.

This seems just like an amusement, but it is powerful. We know this is in fact a map of rings. So if we know how to compute cup products in H^*(M; F), then we can do it for H^*(N; F) as well.

Proof. Suppose not. Let α ∈ H^k(N; F) be non-zero but f^*(α) = 0. As

⟨·, ·⟩ : H^k(N; F) ⊗ H^{d−k}(N; F) → F

is non-singular, we know there is some β ∈ H^{d−k}(N; F) such that

⟨α, β⟩ = (α ∪ β)[N] = 1.

Then we have

deg(f) = deg(f) · 1 = (α ∪ β)(deg(f)[N]) = (α ∪ β)(f_*
[M ]) = (f ∗(α) f ∗(β))([M ]) = 0. This is a contradiction. 83 11 Manifolds and Poincar´e duality III Algebraic Topology 11.5 Intersection product Recall that cohomology comes with a cup product. Thus, Poincar´e duality gives us a product on homology. Our goal in this section is to understand this product. We restrict our attention to smooth manifolds, so that we can talk about the tangent bundle. Recall (from the example sheet) that an orientation of the manifold is the same as an orientation on the tangent bundle. We will consider homology classes that come from submanifolds. For concreteness, let M be a compact smooth R-oriented manifold, and N ⊆ M be an n-dimensional R-oriented submanifold. Let i : N → M be the inclusion. Suppose dim M = d and dim N = n. Then we obtain a canonical homology class i∗[N ] ∈ Hn(M ; R). We will abuse notation and write [N ] for i∗[N ]. This may or may not be zero. Our objective is to show that under suitable conditions, the product of [N1] and [N2] is [N1 ∩ N2]. To do so, we will have to understand what is the cohomology class Poinacr´e dual to [N ]. We claim that, suitably interpreted, it is the Thom class of the normal bundle. Write νN ⊆M for the normal bundle of N in M. Picking a metric on T M, we can decompose i∗T M ∼= T N ⊕ νN ⊆M, Since T M is oriented, we obtain an orientation on the pullback i∗T M. Similarly, T N is also oriented by assumption. In general, we have the following result: Lemma. Let X be a space and V a vector bundle over X. If V = U ⊕ W, then orientations for any two of U, W, V give an orientation for the third. Proof. Say dim V = d, dim U = n, dim W = m. Then at each point x ∈ X, by K¨unneth’s theorem, we have an isomorphism H d(Vx,
V # x ; R) ∼= H n(Ux, U # x ; R) ⊗ H m(Wx, W # x ; R) ∼= R. So any local R-orientation on any two induces one on the third, and it is straightforward to check the local compatibility condition. Can we find a more concrete description of this orientation on νN ⊆M? By c(Rd), we know the same argument as when we showed that H i(Rn | {0}) ∼= H i H i(νN ⊆M, ν# N ⊆M ; R) ∼= H i c(νN ⊆M ; R). Also, by the tubular neighbourhood theorem, we know νN ⊆M is homeomorphic to an open neighbourhood U of N in M. So we get isomorphisms H i c(νN ⊆M ; R) ∼= H i c(U ; R) ∼= Hd−i(U ; R) ∼= Hd−i(N ; R), where the last isomorphism comes from the fact that N is homotopy-equivalent to U. In total, we have an isomorphism H i(νN ⊆M, ν# N ⊆M ; R) ∼= Hd−i(N ; R). 84 11 Manifolds and Poincar´e duality III Algebraic Topology Under this isomorphism, the fundamental class [N ] ∈ Hn(N ; R) corresponds to some EN ⊆M ∈ H d−n(νN ⊆M, ν# N ⊆M ; R) But we know νN ⊆M has dimension d − n. So EN ⊆M is in the right dimension to be a Thom class, and is a generator for H d−n(νN ⊆M, ν# N ⊆M ; R), because it is a generator for Hd−n(N ; R). One can check that this is indeed the Thom class. How is this related to the other things we’ve had? We can draw the commu- tative diagram Hn(N ; R) ∼ Hn(U ; R) i∗ Hn(M ; R) ∼ ∼ H d
−n c (U ; R) extension by 0 H d−n(M ; R) The commutativity of the square is a straightforward naturality property of Poincar´e duality. Under the identification H d−n c (U ; R) ∼= H d−n(νN ⊆M, ν# N ⊆M ; R), the above (U ; R) is the Thom class of the says that the image of [N ] ∈ Hn(N ; R) in H d−n normal bundle νN ⊆M. c On the other hand, if we look at this composition via the bottom map, then [N ] gets sent to D−1 M ([N ]). So we know that Theorem. The Poincar´e dual of a submanifold is (the extension by zero of) the normal Thom class. Now if we had two submanifolds N, W ⊆ M. Then the normal Thom classes give us two cohomology classes of M. As promised, When the two intersect nicely, the cup product of their Thom classes is the Thom class of [N ∩ W ]. The niceness condition we need is the following: Definition (Transverse intersection). We say two submanifolds N, W ⊆ M intersect transversely if for all x ∈ N ∩ W, we have Example. We allow intersections like TxN + TxW = TxM. but not this: 85 11 Manifolds and Poincar´e duality III Algebraic Topology It is a fact that we can always “wiggle” the submanifolds a bit so that the intersection is transverse, so this is not too much of a restriction. We will neither make this precise nor prove it. Whenever the intersection is transverse, the intersection N ∩ W will be a submanifold of M, and of N and W as well. Moreover, (νN ∩W ⊆M )x = (νN ⊆M )x ⊕ (νW ⊆M )x. Now consider the inclusions iN : N ∩ W → N iW : N ∩ W → W. Then we have νN ∩W ⊆M = i∗ N (νN �
⊆M) ⊕ i_W^*(ν_{W⊆M}).

So with some abuse of notation, we can write

i_N^* E_{N⊆M} ∪ i_W^* E_{W⊆M} ∈ H^*(ν_{N∩W⊆M}, ν^#_{N∩W⊆M}; R),

and we can check this gives the Thom class. So we have

D_M^{−1}([N]) ∪ D_M^{−1}([W]) = D_M^{−1}([N ∩ W]).

The slogan is “cup product is Poincaré dual to intersection”. One might be slightly worried about the graded commutativity of the cup product, because N ∩ W = W ∩ N as manifolds, but in general

D_M^{−1}([N]) ∪ D_M^{−1}([W]) ≠ D_M^{−1}([W]) ∪ D_M^{−1}([N]).

The fix is to say that N ∩ W and W ∩ N are not the same as oriented manifolds in general; sometimes they differ by a sign, but we will not go into details. More generally, we can define

Definition (Intersection product). The intersection product on the homology of a compact manifold is given by

H_{n−k}(M) ⊗ H_{n−ℓ}(M) → H_{n−k−ℓ}(M)
(a, b) ↦ a · b = D_M(D_M^{−1}(a) ∪ D_M^{−1}(b)).

Example. We know that

H_{2k}(CP^n; Z) ≅ { Z  if 0 ≤ k ≤ n;  0  otherwise }.

Moreover, we know the generators for these from our computation using cellular homology, namely

[CP^k] ≡ y_k ∈ H_{2k}(CP^n; Z).

To compute [CP^k] · [CP^ℓ], if we picked the canonical copies of CP^ℓ, CP^k ⊆ CP^n, then one would be contained in the other, and this is exactly the opposite of intersecting transversely. Instead, we pick

CP^k = {[z_0 : z_1 : · · · : z_k : 0 : · · · : 0]},  CP^ℓ = {[0 : · · · : 0 : w_0 : · · · : w_ℓ]}.

It is a fact that any two embeddings CP^k → CP^n are homotopic, so we can choose these. Now these two manifolds intersect transversely, and the intersection is

CP^k ∩ CP^ℓ = CP^{k+ℓ−n}.

So this says that

y_k · y_ℓ = ±y_{k+ℓ−n},

where there is some annoying sign we do not bother to figure out. So if x^k is Poincaré dual to y_{n−k}, then

x^k ∪ x^ℓ = x^{k+ℓ},

which is what we have previously found out.

Example. Consider the surface of genus 3, with its usual loops a_1, b_1, a_2, b_2, a_3, b_3, where each a_i meets only b_i, and in a single point. Then we have

a_i · b_i = [pt],  a_i · b_j = 0 for i ≠ j.

So we find the intersection product on H_*(Σ_g; Z), hence the ring structure of H^*(Σ_g; Z). This is so much easier than everything else we've been doing. Here we know that the intersection product is well-defined, so we are free to pick our own nice representatives of the loops to perform the calculation.

Of course, this method is not completely general. For example, it would be difficult to visualize this when we work with manifolds of higher dimension, and more severely, not all homology classes of a manifold have to come from submanifolds (e.g. twice the fundamental class)!

11.6 The diagonal

Again, let M be a compact Q-oriented d-dimensional manifold. Then M × M is a 2d-dimensional manifold. We can try to orient it as follows — Künneth gives us an isomorphism

H^{d+d}(M × M; Q) ≅ H^d(M; Q) ⊗ H^d(M; Q),

as H^k(M; Q) = 0 for k > d. By the universal coefficients theorem, plus the fact that duals commute with tensor products, we have an isomorphism

H_{d+d}(M × M; Q) ≅ H_d(M; Q) ⊗ H_d(M; Q).

Thus the fundamental class [M] gives us a fundamental class

[M × M] = [M] ⊗ [M].

We are going to
We are going to do some magic that involves the diagonal map

  Δ : M → M × M,  x ↦ (x, x).

This gives us a cohomology class

  δ = D^{-1}_{M×M}(Δ_*[M]) ∈ H^d(M × M; Q) ≅ ⊕_{i+j=d} H^i(M; Q) ⊗ H^j(M; Q).

It turns out a lot of things we want to do with this δ can be helped a lot by doing the despicable thing called picking a basis. We let {a_i} be a basis for H^*(M; Q). On this vector space, we have a non-singular form

  ⟨·,·⟩ : H^*(M; Q) ⊗ H^*(M; Q) → Q

given by ⟨φ, ψ⟩ = (φ ⌣ ψ)([M]). Last time we were careful about the degrees of the cochains, but here we just say that if φ ⌣ ψ does not have degree d, then the result is 0. Now let {b_i} be the dual basis to the {a_i} under this form, i.e. ⟨a_i, b_j⟩ = δ_{ij}. It turns out δ has a really nice form expressed using this basis:

Theorem. We have

  δ = Σ_i (−1)^{|a_i|} a_i ⊗ b_i.

Proof. We can certainly write δ = Σ_{i,j} C_{ij} a_i ⊗ b_j for some C_{ij}. So we need to figure out what the coefficients C_{ij} are. We try to compute

  ((b_k ⊗ a_ℓ) ⌣ δ)[M × M] = Σ_{i,j} C_{ij} ((b_k ⊗ a_ℓ) ⌣ (a_i ⊗ b_j))[M × M]
    = Σ_{i,j} C_{ij} (−1)^{|a_ℓ||a_i|} ((b_k ⌣ a_i) ⊗ (a_ℓ ⌣ b_j))([M] ⊗ [M])
    = Σ_{i,j} C_{ij} (−1)^{|a_ℓ||a_i|} (δ_{ik} (−1)^{|a_i||b_k|}) δ_{jℓ}
    = (−1)^{|a_k||a_ℓ| + |a_k||b_k|} C_{kℓ}.

But we can also compute this a different way, using the definition of δ:

  ((b_k ⊗ a_ℓ) ⌣ δ)[M × M] = (b_k ⊗ a_ℓ)(Δ_*[M]) = (b_k ⌣ a_ℓ)[M] = (−1)^{|a_ℓ||b_k|} δ_{kℓ}.

So we see that C_{kℓ} = δ_{kℓ} (−1)^{|a_ℓ|}, which proves the theorem.
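As a sanity check (not in the notes), one can write δ out explicitly for M = S^n, with basis 1 ∈ H^0 and a generator v ∈ H^n satisfying v[S^n] = 1; the dual basis pairs 1 with v and v with 1:

```latex
\delta = 1 \otimes v + (-1)^n\, v \otimes 1,
\qquad
\Delta^*(\delta)[S^n] = \bigl(1 \smile v + (-1)^n\, v \smile 1\bigr)[S^n]
  = 1 + (-1)^n = \chi(S^n).
```

This already exhibits the corollary below in the simplest non-trivial case.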
Corollary. We have

  Δ^*(δ)[M] = χ(M),

the Euler characteristic.

Proof. We note that for a ⊗ b ∈ H^n(M × M), we have

  Δ^*(a ⊗ b) = Δ^*(π_1^* a ⌣ π_2^* b) = a ⌣ b,

because π_i ∘ Δ = id. So we have

  Δ^*(δ) = Σ_i (−1)^{|a_i|} a_i ⌣ b_i.

Thus

  Δ^*(δ)[M] = Σ_i (−1)^{|a_i|} = Σ_k (−1)^k dim_Q H^k(M; Q) = χ(M).

So far everything works for an arbitrary manifold. Now we suppose further that M is smooth. Then δ is the Thom class of the normal bundle ν_{M⊆M×M} of M → M × M. By definition, pulling this back along Δ to M gives the Euler class of the normal bundle. But we know ν_{M⊆M×M} ≅ TM, because the fibre at x is the cokernel of the map

  T_xM --(Δ)--> T_xM ⊕ T_xM,

and the map

  T_xM ⊕ T_xM → T_xM,  (v, w) ↦ v − w

gives us an isomorphism

  (T_xM ⊕ T_xM)/ΔT_xM ≅ T_xM.

Corollary. We have e(TM)[M] = χ(M).

Corollary. If M has a nowhere-zero vector field, then χ(M) = 0.

More generally, this tells us that χ(M) is the number of zeroes of a vector field M → TM (transverse to the zero section) counted with sign.

Lemma. Suppose we have R-oriented vector bundles E → X and F → X with Thom classes u_E, u_F. Then the Thom class for E ⊕ F → X is u_E ⌣ u_F. Thus
e(E ⊕ F) = e(E) ⌣ e(F).

Proof. More precisely, we have projection maps

  E ←(π_E)– E ⊕ F –(π_F)→ F.

We let U = π_E^{-1}(E#) and V = π_F^{-1}(F#). Now observe that U ∪ V = (E ⊕ F)#. So if dim E = e and dim F = f, then we have a map

  H^e(E, E#) ⊗ H^f(F, F#) --(π_E^* ⊗ π_F^*)--> H^e(E ⊕ F, U) ⊗ H^f(E ⊕ F, V) --(⌣)--> H^{e+f}(E ⊕ F, (E ⊕ F)#),

and it is easy to see that the image of u_E ⊗ u_F is the Thom class of E ⊕ F by checking the fibres.

Corollary. TS^{2n} has no proper subbundles.

Proof. We know e(TS^{2n}) ≠ 0, as e(TS^{2n})[S^{2n}] = χ(S^{2n}) = 2. But it cannot be a proper cup product of two classes, since there is nothing in the lower cohomology groups. So TS^{2n} is not the sum of two subbundles. Hence TS^{2n} cannot have a proper subbundle E, or else TS^{2n} = E ⊕ E^⊥ (for any choice of inner product).

11.7 Lefschetz fixed point theorem

Finally, we are going to prove the Lefschetz fixed point theorem. This is going to be better than the version you get in Part II, because this time we will know how many fixed points there are.

So let M be a compact d-dimensional Z-oriented manifold, and let f : M → M be a map. Now if we want to count the number of fixed points, then we want to make sure the map is "transverse" in some sense, so that there aren't infinitely many fixed points. It turns out the right
condition is that the graph

  Γ_f = {(x, f(x)) ∈ M × M} ⊆ M × M

has to be transverse to the diagonal. Since Γ_f ∩ Δ is exactly the fixed points of f, this is equivalent to requiring that for each fixed point x, the map

  D_xΔ ⊕ D_xF : T_xM ⊕ T_xM → T_xM ⊕ T_xM

is an isomorphism, where F(x) = (x, f(x)). We can write the matrix of this map, which is

  [ I    I    ]
  [ I   D_xf ].

Doing a bunch of row and column operations, this is invertible if and only if

  [ I       I     ]
  [ 0   I − D_xf  ]

is invertible. Thus the condition is equivalent to requiring that 1 is not an eigenvalue of D_xf. The claim is now the following:

Theorem (Lefschetz fixed point theorem). Let M be a compact d-dimensional Z-oriented manifold, and let f : M → M be a map such that the graph Γ_f and the diagonal Δ intersect transversely. Then

  Σ_{x ∈ fix(f)} sgn det(I − D_xf) = Σ_k (−1)^k tr(f^* : H^k(M; Q) → H^k(M; Q)).

Proof. We have

  [Γ_f] · [Δ(M)] ∈ H_0(M × M; Q).

We now want to calculate ε of this, where ε is the augmentation. By Poincaré duality, this is equal to

  (D^{-1}_{M×M}[Γ_f] ⌣ D^{-1}_{M×M}[Δ(M)])[M × M] ∈ Q.

This is the same as

  (D^{-1}_{M×M}[Δ(M)])([Γ_f]) = δ(F_*[M]) = (F^*δ)[M],

where F : M → M × M is given by F(x) = (x, f(x)). We now use the fact that

  δ = Σ_i (−1)^{|a_i|} a_i ⊗ b_i.

So we have

  F^*δ = Σ_i (−1)^{|a_i|} a_i ⊗ f^*b_i.

We write

  f^*b_i = Σ_j C_{ij} b_j.
Then we have

  (F^*δ)[M] = Σ_{i,j} (−1)^{|a_i|} C_{ij} (a_i ⌣ b_j)[M] = Σ_i (−1)^{|a_i|} C_{ii},

and Σ_i (−1)^{|a_i|} C_{ii} is just the alternating sum of the traces of f^*, i.e. the right-hand side.

We now compute this product in a different way. As Γ_f and Δ(M) are transverse, we know Γ_f ∩ Δ(M) is a 0-manifold, and the orientations of Γ_f and Δ(M) induce an orientation of it. So we have

  [Γ_f] · [Δ(M)] = [Γ_f ∩ Δ(M)] ∈ H_0(M × M; Q).

We know this Γ_f ∩ Δ(M) has |fix(f)| many points, so [Γ_f ∩ Δ(M)] is the sum of |fix(f)| many signed points, which is what we've got on the left above. That the sign of each term is in fact sgn det(I − D_xf) is left as an exercise on the example sheet.

Example. Any map f : CP^{2n} → CP^{2n} has a fixed point. We can't prove this using the normal fixed point theorem, but we can exploit the ring structure of cohomology to do this. We must have

  f^*(x) = λx ∈ H^2(CP^{2n}; Q) = Qx

for some λ ∈ Q. So we must have f^*(x^i) = λ^i x^i. We can now very easily compute the right-hand side of the fixed point theorem:

  Σ_k (−1)^k tr(f^* : H^k → H^k) = 1 + λ + λ² + ⋯ + λ^{2n},

and this cannot be zero, since 1 + λ + ⋯ + λ^{2n} > 0 for every real λ.
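To see the right-hand side of the Lefschetz formula in a computable case (my own example, not from the notes): a self-map of the torus T² induced by an integer matrix A acts as 1 on H^0, as A on H^1, and as det A on H^2, so its Lefschetz number is 1 − tr A + det A. A sketch:

```python
import numpy as np

def lefschetz_number(traces):
    """L(f) = sum_k (-1)^k tr(f^* : H^k(M; Q) -> H^k(M; Q))."""
    return sum((-1) ** k * t for k, t in enumerate(traces))

# Hypothetical torus map induced by A on H^1(T^2; Q) = Q^2:
A = np.array([[2, 1],
              [1, 1]])
L = lefschetz_number([1, np.trace(A), round(np.linalg.det(A))])
# For such a linear torus map, L agrees with det(I - A),
# the signed count of fixed points.
```

Here L = 1 − 3 + 1 = −1, matching det(I − A) = −1.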
…be invertible, i.e. a unit, as they cannot all lie in the Jacobson radical. We may wlog assume α_1 ∘ β_1 is a unit. If this is the case, then both α_1 and β_1 have to be invertible. So M_1 ≅ M'_1.

Consider the map φ = id − θ, where θ is the composite

  M ↠ M_1 --(α_1^{-1})--> M'_1 ↪ M ↠ ⊕_{i=2}^m M_i ↪ M.

Then φ(M'_1) = M_1, so φ restricted to M'_1 looks like α_1. Also,

  φ(⊕_{i=2}^m M_i) = ⊕_{i=2}^m M_i,

and φ restricted to ⊕_{i=2}^m M_i looks like the identity map. So in particular, we see that φ is surjective. However, if φ(x) = 0, this says x = θ(x), so x ∈ ⊕_{i=2}^m M_i.

1 Artinian algebras (III Algebras)

But then θ(x) = 0. Thus x = 0. Thus φ is an automorphism of M with φ(M'_1) = M_1. So this gives an isomorphism

  ⊕_{i=2}^m M_i ≅ M/M_1 ≅ M/M'_1 ≅ ⊕_{i=2}^n M'_i,

and so we are done by induction.

Now it remains to prove that the endomorphism rings are local. Recall the following result from linear algebra.

Lemma (Fitting). Suppose M is a module with both the ACC and DCC on submodules, and let f ∈ End_A(M). Then for large enough n, we have

  M = im f^n ⊕ ker f^n.

Proof. By ACC and
DCC, we may choose n large enough so that

  f^n : f^n(M) → f^{2n}(M)

is an isomorphism: as we keep iterating f, the images form a descending chain and the kernels an ascending chain, and these have to terminate. If m ∈ M, then we can write

  f^n(m) = f^{2n}(m_1)

for some m_1. Then

  m = f^n(m_1) + (m − f^n(m_1)) ∈ im f^n + ker f^n,

and also

  im f^n ∩ ker f^n = ker(f^n : f^n(M) → f^{2n}(M)) = 0.

So done.

Lemma. Suppose M is an indecomposable module satisfying the ACC and DCC on submodules. Then B = End_A(M) is local.

Proof. Choose a maximal left ideal I of B. It's enough to show that if x ∉ I, then x is left invertible. By maximality of I, we know B = Bx + I. We write

  1 = λx + y

for some λ ∈ B and y ∈ I. Since y ∈ I, it has no left inverse. So it is not an isomorphism. By Fitting's lemma and the indecomposability of M, we see that y^m = 0 for some m. Thus

  (1 + y + y² + ⋯ + y^{m−1})λx = (1 + y + ⋯ + y^{m−1})(1 − y) = 1.

So x is left invertible.

Corollary. Let A be a left Artinian algebra. Then A has the unique decomposition property.

Proof. We know A satisfies the ACC and DCC conditions. So A, as a left module over itself, is a finite direct sum of indecomposables.

So if A is an Artinian algebra, we know A can be uniquely decomposed as a direct sum of indecomposable projectives,

  A = ⊕ P_j.

For convenience, we will work with right Artinian algebras and right modules instead of left ones. It turns out that instead of studying projectives in A, we can study idempotent elements instead. Recall that End(A_A) ≅ A.
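Fitting's lemma is easy to see for a single linear map on a finite-dimensional vector space, the simplest module with both chain conditions. A sketch with an assumed 3×3 example, invertible on one coordinate and nilpotent on the rest:

```python
import numpy as np

f = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # invertible on span(e1), nilpotent on span(e2, e3)

fn = np.linalg.matrix_power(f, 3)       # n = dim M is always "large enough"
image_rank = np.linalg.matrix_rank(fn)  # im f^3 = span(e1)
kernel_dim = 3 - image_rank             # ker f^3 = span(e2, e3)
# f^3 restricts to an isomorphism of its image, so im ∩ ker = 0
# and M = im f^3 ⊕ ker f^3, exactly as the lemma predicts.
```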
The projection onto P_j is achieved by left multiplication by an idempotent e_j, and P_j = e_jA. The fact that A decomposes as a direct sum of the P_j translates to the conditions

  Σ e_j = 1,  e_ie_j = 0 for i ≠ j.

Definition (Orthogonal idempotents). A collection of idempotents {e_i} is orthogonal if e_ie_j = 0 for i ≠ j.

The indecomposability of P_j is equivalent to e_j being primitive:

Definition (Primitive idempotent). An idempotent is primitive if it cannot be expressed as a sum e = e' + e'', where e' and e'' are orthogonal idempotents, both non-zero.

We see that giving a direct sum decomposition of A is equivalent to finding an orthogonal collection of primitive idempotents that sum to 1. This is rather useful, because idempotents are easier to move around than projectives.

Our current plan is as follows — given an Artinian algebra A, we can quotient out by J(A), and obtain a semi-simple algebra A/J(A). By Artin–Wedderburn, we know how we can decompose A/J(A), and we hope to be able to lift this decomposition to one of A. The point of talking about idempotents instead is that we know what it means to lift elements.

Proposition. Let N be a nilpotent ideal in A, and let f be an idempotent of A/N ≡ Ā. Then there is an idempotent e ∈ A with f = ē.

In particular, we know J(A) is nilpotent, and this proposition applies. The proof involves a bit of magic.

Proof. We consider the quotients A/N^i for i ≥ 1. We will lift the idempotents successively as we increase i, and since N is nilpotent, repeating this process will eventually land us in A. Suppose we have found an idempotent f_{i−1} ∈ A/N^{i−1} whose image in A/N is f. We want to find f_i ∈ A/N^i whose image in A/N is f. For i > 1, we let x be an element of A/N^i with
image f_{i−1} in A/N^{i−1}. Then since x² − x vanishes in A/N^{i−1}, we know x² − x ∈ N^{i−1}/N^i. Then in particular,

  (x² − x)² = 0 ∈ A/N^i.  (†)

We let f_i = 3x² − 2x³. Then by a direct computation using (†), we find f_i² = f_i, and f_i has image 3f_{i−1}² − 2f_{i−1}³ = f_{i−1} in A/N^{i−1} (alternatively, in characteristic p, we can use f_i = x^p). Since N^k = 0 for some k, this process gives us what we want.

Just being able to lift idempotents is not good enough. We want to lift decompositions into projective indecomposables. So we need to do better.

Corollary. Let N be a nilpotent ideal of A. Let 1̄ = f_1 + ⋯ + f_r with {f_i} orthogonal primitive idempotents in A/N. Then we can write

  1 = e_1 + ⋯ + e_r,

with {e_i} orthogonal primitive idempotents in A, and ē_i = f_i.

Proof. We define a sequence e'_i ∈ A inductively. We set e'_1 = 1. Then for each i > 1, we pick e'_i a lift of f_i + ⋯ + f_r, which by the inductive hypothesis lies in the image of e'_{i−1}Ae'_{i−1} in A/N. Then

  e'_ie'_{i+1} = e'_{i+1} = e'_{i+1}e'_i.

We let e_i = e'_i − e'_{i+1}. Then ē_i = f_i. Also, if j > i, then

  e'_j = e'_{i+1}e'_je'_{i+1},

and so

  e_ie_j = (e'_i − e'_{i+1})e'_{i+1}e'_je'_{i+1} = 0.

Similarly e_je_i = 0.
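A concrete instance of the lifting formula f_i = 3x² − 2x³ (my own toy example, not from the notes): take A to be the 2×2 upper-triangular integer matrices and N the strictly upper-triangular ones, so that N² = 0 and one pass of the formula already lands on a genuine idempotent.

```python
import numpy as np

x = np.array([[1, 5],
              [0, 1]])      # idempotent mod N: its image in A/N is the identity
x2 = x @ x                  # x^2 - x = [[0, 5], [0, 0]] lies in N
f = 3 * x2 - 2 * (x2 @ x)   # the magic formula f = 3x^2 - 2x^3
# f is a genuine idempotent, and f - x ∈ N, so f lifts the class of x in A/N.
```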
We now apply this lifting of idempotents to N = J(A), which we know is nilpotent. We know A/N is the direct sum of simple modules, and thus the decomposition corresponds to

  1̄ = f_1 + ⋯ + f_r ∈ A/J(A),

and these f_i are orthogonal primitive idempotents. Idempotent lifting then gives

  1 = e_1 + ⋯ + e_r ∈ A,

and these are orthogonal primitive idempotents. So we can write

  A = ⊕ e_jA = ⊕ P_j,

where the P_j = e_jA are indecomposable projectives, and P_j/P_jJ(A) = S_j is simple. By Krull–Schmidt, any indecomposable projective is isomorphic to one of these P_j.

The final piece of the picture is to figure out when two indecomposable projectives lie in the same block. Recall that if M is a right A-module and e is an idempotent, then Me ≅ Hom_A(eA, M). In particular, if M = fA for some idempotent f, then

  Hom_A(eA, fA) ≅ fAe.

However, if e and f are in different blocks, say B_1 and B_2, then fAe ∈ B_1 ∩ B_2 = 0, since B_1 and B_2 are (two-sided!) ideals. So we know Hom(eA, fA) = 0; contrapositively, if Hom(eA, fA) ≠ 0, then they are in the same block.

The existence of a homomorphism can alternatively be expressed in terms of composition factors. We have seen that each indecomposable projective P has a simple "top" P/PJ(A) ≅ S.

Definition (Composition factor). A simple module S is a composition factor of a module M if there are submodules M_1 ≤ M_2 ≤ M with M_2/M_1 ≅ S.

Suppose S is a composition factor of a module M, say with a surjection M_2 ↠ S. Since P also surjects onto S, by definition of projectivity we obtain a non-zero map P → M_2 ≤ M lifting P → S.

Lemma. Let P be an indecomposable projective, and M an A-module. Then Hom(P, M) ≠ 0 iff P/PJ(A) is a composition factor of M.

Proof. We have proven
(⇐). Conversely, suppose there is a non-zero map f : P → M. Then it factors as

  S = P/PJ(A) → im f/(im f)J(A).

Now we cannot have im f = (im f)J(A), or else we would have im f = (im f)J(A)^n = 0 for sufficiently large n, since J(A) is nilpotent. So this map must be injective, hence an isomorphism. So this exhibits S as a composition factor of M.

We define a (directed) graph whose vertices are labelled by the indecomposable projectives, and there is an edge P_1 → P_2 if the top S_1 of P_1 is a composition factor of P_2.

Theorem. Indecomposable projectives P_1 and P_2 are in the same block if and only if they lie in the same connected component of the graph.

Proof. It is clear that if P_1 and P_2 are in the same connected component, then they are in the same block.

Conversely, consider a connected component X, and consider

  I = ⊕_{P ∈ X} P.

We show that this is in fact a left ideal, hence an ideal. Consider any x ∈ A. Then for each P ∈ X, left multiplication by x gives a map P → A, and if we decompose A = ⊕ P_i, then this can be expressed as a sum of maps f_i : P → P_i. Now such a map can be non-zero only if the top of P is a composition factor of P_i. So if f_i ≠ 0, then P_i ∈ X. So left multiplication by x maps I to itself, and it follows that I is an ideal.

1.5 K_0

We now briefly talk about the notion of K_0.

Definition (K_0). For any associative k-algebra A, consider the free abelian group with basis labelled by the isomorphism classes [P] of finitely-generated projective A-modules. Then introduce relations

  [P_1] + [P_2] = [P_1 ⊕ P_2].

This yields an abelian group, namely the quotient of the free abelian group by the subgroup generated by [P_1] + [P_2] − [
P_1 ⊕ P_2]. The abelian group so obtained is K_0(A).

Example. If A is an Artinian algebra, then we know that any finitely-generated projective is a direct sum of indecomposable projectives, and this decomposition is unique by Krull–Schmidt. So K_0(A) is the free abelian group generated by the isomorphism classes of indecomposable projectives. So

  K_0(A) ≅ Z^r,

where r is the number of isomorphism classes of indecomposable projectives, which is the number of isomorphism classes of simple modules. Here we're using the fact that two indecomposable projectives are isomorphic iff their simple tops are isomorphic.

It turns out there is a canonical map K_0(A) → A/[A, A]. Recall we met A/[A, A] when we were counting the number of simple modules. We remarked that it was the 0th Hochschild homology group, and that when A = kG, there is a k-basis of A/[A, A] given by the g_i + [A, A], where the g_i are conjugacy class representatives.

To construct this canonical map, we first look at the trace map M_n(A) → A/[A, A]. This is a k-linear map, invariant under conjugation. We also note that the canonical inclusion

  M_n(A) → M_{n+1}(A),  X ↦ ( X 0 ; 0 0 )

is compatible with the trace map. We observe that the trace induces an isomorphism

  M_n(A)/[M_n(A), M_n(A)] → A/[A, A]

by linear algebra.

Now suppose P is finitely generated projective. It is a direct summand of some A^n; thus we can write A^n = P ⊕ Q, with P, Q projective. Moreover, projection onto P corresponds to an idempotent e in M_n(A) = End_A(A^n), and we have

  P = e(A^n),  End_A(P) = eM_n(A)e.

Any other choice of idempotent yields an idempotent e_1 conjugate to e in M_{2n}(A). Therefore the
trace of an endomorphism of P is well-defined in A/[A, A], independent of the choice of e. Thus we have a trace map

  End_A(P) → A/[A, A].

In particular, the trace of the identity map on P is the trace of e. We call this the trace of P.

Note that if we have finitely generated projectives P_1 and P_2, then we have

  P_1 ⊕ Q_1 = A^n,  P_2 ⊕ Q_2 = A^m.

Then we have

  (P_1 ⊕ P_2) ⊕ (Q_1 ⊕ Q_2) = A^{m+n}.

So we deduce that

  tr(P_1 ⊕ P_2) = tr P_1 + tr P_2.

Definition (Hattori–Stallings trace map). The map K_0(A) → A/[A, A] induced by the trace is the Hattori–Stallings trace map.

Example. Let A = kG, with G finite. Then A/[A, A] is a k-vector space with basis labelled by a set of conjugacy class representatives {g_i}. Then we know, for a finitely generated projective P, we can write

  tr P = Σ r_P(g_i) g_i,

where the r_P(g_i) may be regarded as class functions. However, P may also be regarded as a k-vector space. So there is a trace map End_k(P) → k, and also the "character" χ_P : G → k, where χ_P(g) = tr g. Hattori proved that if C_G(g) is the centralizer of g ∈ G, then

  χ_P(g) = |C_G(g)| r_P(g^{-1}).  (∗)

If char k = 0 and k is algebraically closed, then we know kG is semi-simple. So every finitely generated projective is a direct sum of simples, and with r the number of simples, (∗) implies that the trace map

  K_0(kG) ≅ Z^r → kG/[kG, kG] ≅ k^r

is the
natural inclusion.

This is the start of the theory of algebraic K-theory, which is a homology theory telling us about the endomorphisms of free A-modules. We can define K_1(A) to be the abelianization of

  GL(A) = lim_{n→∞} GL_n(A).

K_2(A) tells us something about the relations required if you express GL(A) in terms of generators and relations. We're being deliberately vague; these groups are very hard to compute. Just as we saw in the i = 0 case, there are canonical maps

  K_i(A) → HH_i(A),

where HH_* is the Hochschild homology. The i = 1 case is called the Dennis trace map. These are analogous to the Chern maps in topology.

2 Noetherian algebras

2.1 Noetherian algebras

In the introduction, we met the definition of Noetherian algebras.

Definition (Noetherian algebra). An algebra is left Noetherian if it satisfies the ascending chain condition (ACC) on left ideals, i.e. if

  I_1 ≤ I_2 ≤ I_3 ≤ ⋯

is an ascending chain of left ideals, then there is some N such that I_{N+m} = I_N for all m ≥ 0. Right Noetherian is defined similarly, and we say an algebra is Noetherian if it is both left and right Noetherian.

We've also met a few examples. Here we are going to meet lots more. In fact, most of this first section is about establishing tools to show that certain algebras are Noetherian. One source of Noetherian algebras is via constructing polynomial and power series rings. Recall that in IB Groups, Rings and Modules, we proved the Hilbert basis theorem:

Theorem (Hilbert basis theorem). If A is Noetherian, then A[X] is Noetherian.

Note that our proof did not depend on A being commutative. The same proof works for non-commutative rings. In particular, this tells us k[X_1, ⋯, X_n] is Noetherian. It is also true that power series rings of Noetherian algebras are
also Noetherian. The proof is very similar, but for completeness, we will spell it out completely.

Theorem. Let A be left Noetherian. Then A[[X]] is left Noetherian.

Proof. Let I be a left ideal of A[[X]]. We'll show that if A is left Noetherian, then I is finitely generated. Let

  J_r = {a ∈ A : there exists an element of I of the form aX^r + higher degree terms}.

We note that J_r is a left ideal of A, and also

  J_0 ≤ J_1 ≤ J_2 ≤ J_3 ≤ ⋯,

as we can always multiply by X. Since A is left Noetherian, this chain terminates at J_N for some N. Also, J_0, J_1, J_2, ⋯, J_N are all finitely generated left ideals. We suppose a_{i1}, ⋯, a_{is_i} generates J_i for i = 0, 1, ⋯, N. These correspond to elements

  f_{ij}(X) = a_{ij}X^i + higher degree terms ∈ I.

We show that this finite collection generates I as a left ideal. Take f(X) ∈ I, and suppose it looks like

  b_nX^n + higher terms,

with b_n ≠ 0. Suppose n < N. Then b_n ∈ J_n, and so we can write

  b_n = Σ_j c_{nj}a_{nj}.

So

  f(X) − Σ_j c_{nj}f_{nj}(X) ∈ I

has zero coefficient for X^n, and all other terms are of higher degree. Repeating the process, we may thus wlog assume n ≥ N. We get f(X) of the form

  d_NX^N + higher degree terms.

The same process gives f(X) − Σ_j c_{Nj}f_{Nj}(X) with terms of degree N + 1 or higher. We can repeat this yet again, using the fact that J_{N+1} = J_N, and we obtain

  f(X) − Σ_j c_{Nj}f_{Nj}(X) − Σ_j d_{N+1,j}Xf_{Nj}(X) − ⋯.

Continuing, we find f(X) = Σ_j e_j(X)f_{Nj}(X) for some e_j(X) ∈ A[[X]]. So f is in the left
ideal generated by our list, and we are done.

Example. It is straightforward to see that quotients of Noetherian algebras are Noetherian. Thus, algebra images of the algebras A[X] and A[[X]] are also Noetherian. For example, finitely-generated commutative k-algebras are always Noetherian. Indeed, if we have a generating set {x_i} of A as a k-algebra, then there is a surjective algebra homomorphism

  k[X_1, ⋯, X_n] → A,  X_i ↦ x_i.

We also saw previously:

Example. Any Artinian algebra is Noetherian.

The next two examples we are going to see are less obviously Noetherian, and proving that they are Noetherian takes some work.

Definition (nth Weyl algebra). The nth Weyl algebra A_n(k) is the algebra generated by X_1, ⋯, X_n, Y_1, ⋯, Y_n with relations

  Y_iX_i − X_iY_i = 1

for all i, and everything else commutes.

This algebra acts on the polynomial algebra k[X_1, ⋯, X_n], with X_i acting by left multiplication and Y_i acting as ∂/∂X_i. Thus k[X_1, ⋯, X_n] is a left A_n(k)-module. This is the prototype for thinking about differential algebras, and D-modules in general (which we will not talk about).

The other example we have is the universal enveloping algebra of a Lie algebra.

Definition (Universal enveloping algebra). Let g be a Lie algebra over k, and take a k-vector space basis x_1, ⋯, x_n. We form an associative algebra with generators x_1, ⋯, x_n with relations

  x_ix_j − x_jx_i = [x_i, x_j],

and this is the universal enveloping algebra U(g).

Example. If g is abelian, i.e. [x_i, x_j] = 0 in g, then the enveloping algebra is the polynomial algebra in x_1, ⋯, x_n.

Example. If g = sl_2(k), then we have the standard basis

  e = ( 0 1 ; 0 0 ),  f = ( 0 0 ; 1 0 ),  h = ( 1 0 ; 0 −1 ).

They satisfy [e
, f] = h, [h, e] = 2e, [h, f] = −2f.

To prove that A_n(k) and U(g) are Noetherian, we need some machinery that involves some "deformation theory". The main strategy is to make use of a natural filtration of the algebra.

Definition (Filtered algebra). A (Z-)filtered algebra A is an algebra with a collection of k-vector spaces

  ⋯ ≤ A_{−1} ≤ A_0 ≤ A_1 ≤ A_2 ≤ ⋯

such that A_i · A_j ⊆ A_{i+j} for all i, j ∈ Z, and 1 ∈ A_0.

For example, a polynomial ring is naturally filtered by the degree of the polynomial. The definition above was rather general, and often, we prefer to talk about more well-behaved filtrations.

Definition (Exhaustive filtration). A filtration is exhaustive if ∪A_i = A.

Definition (Separated filtration). A filtration is separated if ∩A_i = {0}.

Unless otherwise specified, our filtrations are exhaustive and separated. For the moment, we will mostly be interested in positive filtrations.

Definition (Positive filtration). A filtration is positive if A_i = 0 for i < 0.

Our canonical source of filtrations is the following construction:

Example. If A is an algebra generated by x_1, ⋯, x_n, say, we can set
– A_0 = the k-span of 1,
– A_1 = the k-span of 1, x_1, ⋯, x_n,
– A_2 = the k-span of 1, x_1, ⋯, x_n and the x_ix_j for i, j ∈ {1, ⋯, n}.

In general, A_r consists of the elements that are (non-commutative) polynomial expressions of degree ≤ r in the generators. Of course, the filtration depends on the choice of the generating set.
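Before moving on, the A_1(k)-module structure on polynomials described above is easy to check symbolically. A sketch (sympy, variable names my own) verifying the defining relation YX − XY = 1 as operators on k[t]:

```python
import sympy as sp

t = sp.symbols('t')

def X(p):
    """X acts on k[t] by multiplication by t."""
    return sp.expand(t * p)

def Y(p):
    """Y acts on k[t] by d/dt."""
    return sp.diff(p, t)

p = t**3 + 2 * t + 7
lhs = sp.expand(Y(X(p)) - X(Y(p)))   # (YX - XY) p = p + t p' - t p' = p
```

The same check works for any polynomial p, which is exactly the statement that the relation holds on the whole module.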
Often, to understand a filtered algebra, we consider a nicer object, known as the associated graded algebra.

Definition (Associated graded algebra). Given a filtration of A, the associated graded algebra is the vector space direct sum

  gr A = ⊕_i A_i/A_{i−1}.

This is given the structure of an algebra by defining multiplication by

  (a + A_{i−1})(b + A_{j−1}) = ab + A_{i+j−1} ∈ A_{i+j}/A_{i+j−1}.

In our example of a finitely-generated algebra, the graded algebra is generated by x_1 + A_0, ⋯, x_n + A_0 ∈ A_1/A_0.

The associated graded algebra has the natural structure of a graded algebra:

Definition (Graded algebra). A (Z-)graded algebra is an algebra B that is of the form

  B = ⊕_{i∈Z} B_i,

where the B_i are k-subspaces and B_iB_j ⊆ B_{i+j}. The B_i are called the homogeneous components. A graded ideal is an ideal of the form ⊕ J_i, where J_i is a subspace of B_i, and similarly for left and right ideals.

There is an intermediate object between a filtered algebra and its associated graded algebra, known as the Rees algebra.

Definition (Rees algebra). Let A be a filtered algebra with filtration {A_i}. Then the Rees algebra Rees(A) is the subalgebra ⊕ A_iT^i of the Laurent polynomial algebra A[T, T^{−1}] (where T commutes with A).

Since 1 ∈ A_0 ⊆ A_1, we know T ∈ Rees(A). The key observations are:
– Rees(A)/(T) ≅ gr A;
– Rees(A)/(1 − T) ≅ A.

Since A_n(k) and U(g) are finitely-generated algebras, they come with a natural filtration induced by the generating set. It turns out, in both cases, the associated graded algebras are pretty simple.

Example. Let A = A_n(k), with generating set X_1, ⋯, X_n and Y_1, ⋯, Y_n. We take the filtration
as for a finitely-generated algebra. Now observe that if a_i ∈ A_i and a_j ∈ A_j, then

  a_ia_j − a_ja_i ∈ A_{i+j−2}.

So we see that gr A is commutative, and in fact

  gr A_n(k) ≅ k[X̄_1, ⋯, X̄_n, Ȳ_1, ⋯, Ȳ_n],

where X̄_i, Ȳ_i are the images of X_i and Y_i in A_1/A_0 respectively. This is not hard to prove, but is rather messy; it requires a careful induction.

Example. Let g be a Lie algebra, and consider A = U(g). This has generating set x_1, ⋯, x_n, which is a vector space basis for g. Again using the filtration for finitely-generated algebras, we get that if a_i ∈ A_i and a_j ∈ A_j, then

  a_ia_j − a_ja_i ∈ A_{i+j−1}.

So again gr A is commutative. In fact, we have

  gr A ≅ k[x̄_1, ⋯, x̄_n].

The fact that this is a polynomial algebra amounts to the same as the Poincaré–Birkhoff–Witt theorem, which gives a k-vector space basis for U(g).

In both cases, we find that gr A is finitely-generated and commutative, and therefore Noetherian. We want to use this fact to deduce something about A itself.

Lemma. Let A be a positively filtered algebra. If gr A is Noetherian, then A is left Noetherian.

By duality, the same argument on the right shows that A is also right Noetherian.

Proof. Given a left ideal I of A, we can form

  gr I = ⊕ (I ∩ A_i)/(I ∩ A_{i−1}),

where I is filtered by {I ∩ A_i}. By the isomorphism theorem, we know

  (I ∩ A_i)/(I ∩ A_{i−1}) ≅ (I ∩ A_i + A_{i−1})/A_{i−1} ⊆ A_i/A_{i−1}.

Then gr I is a graded left ideal of gr A. Now suppose we have a strictly ascending chain I
_1 < I_2 < ⋯ of left ideals. Since we have a positive filtration, for each strict inclusion there is some i with

  I_1 ∩ A_i ⊊ I_2 ∩ A_i  and  I_1 ∩ A_{i−1} = I_2 ∩ A_{i−1}.

Thus

  gr I_1 ⊊ gr I_2 ⊊ gr I_3 ⊊ ⋯.

This is a contradiction, since gr A is Noetherian. So A must be Noetherian.

Where we need positivity is in the existence of that transition from equality to non-equality. If we have a Z-filtered algebra instead, then we need to impose some completeness assumption, but we will not go into that.

Corollary. A_n(k) and U(g) are left/right Noetherian.

Proof. gr A_n(k) and gr U(g) are commutative and finitely generated algebras, hence Noetherian.

Note that there is an alternative filtration for A_n(k) yielding a commutative associated graded algebra, by setting

  A_0 = k[X_1, ⋯, X_n],  A_1 = k[X_1, ⋯, X_n] + Σ_{j=1}^n k[X_1, ⋯, X_n]Y_j,

i.e. terms linear in the Y_j, and then keep on going. Essentially, we are filtering on the degree in the Y_i only. This also gives a polynomial algebra as the associated graded algebra. The main difference is that when we take a commutator, we don't go down by two degrees, but only one. Later, we will see this is advantageous when we want to get a Poisson bracket on the associated graded algebra.

We can look at further examples of Noetherian algebras.

Example. The quantum plane k_q[X, Y] has generators X and Y, with relation

  XY = qYX

for some q ∈ k^×. This thing behaves differently depending on whether q is a root of unity or not. The quantum plane first appeared in mathematical physics.

Example. The quantum torus k_q[X, X^{−1}, Y, Y^{−1}] has generators X, X^{−1}, Y, Y^{−1} with relations

  XX^{−1} = YY^{−1} = 1,  XY = qYX.
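Aside (not in the notes): the relation XY = qYX lets us put every word in X and Y into the normal form q^k Y^a X^b, where k counts the pairs in which an X stands to the left of a Y. A small sketch with a hypothetical helper:

```python
def normal_order(word):
    """Rewrite a word in the letters 'X', 'Y' as q^k Y^a X^b using XY = q YX.

    Returns (k, a, b): each Y must commute past every X standing to its
    left, and each such swap produces one factor of q."""
    k = 0
    xs_seen = 0
    for c in word:
        if c == 'X':
            xs_seen += 1
        elif c == 'Y':
            k += xs_seen
    return (k, word.count('Y'), word.count('X'))
```

For instance, normal_order("XXY") gives (2, 1, 2), i.e. XXY = q² Y X², since the Y passes two X's on its way left.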
The word "quantum" in this context is usually thrown around a lot, and doesn't really mean much apart from non-commutativity; there is very little connection with actual physics. These algebras are both left and right Noetherian. We cannot prove this by filtering, as we just did. We will need a version of Hilbert's basis theorem which allows twisting of the coefficients. This is left as an exercise on the second example sheet.

In the examples of A_n(k) and U(g), the associated graded algebras are commutative. However, it turns out we can still capture the non-commutativity of the original algebra by some extra structure on the associated graded algebra. So suppose A is a (positively) filtered algebra whose associated graded algebra gr A is commutative. Recall that the filtration has a corresponding Rees algebra, and we saw that Rees A/(T) ≅ gr A. Since gr A is commutative, this means

  [Rees A, Rees A] ⊆ (T).

This induces a map

  Rees(A) × Rees(A) → (T)/(T²),  (r, s) ↦ [r, s] + (T²).

Quotienting out by (T) on the left, this gives a map

  gr A × gr A → (T)/(T²).

We can in fact identify the right-hand side with gr A as well. Indeed, the map

  gr A ≅ Rees(A)/(T) --(mult. by T)--> (T)/(T²)

is an isomorphism of gr A ≅ Rees A/(T)-modules. We then have a bracket

  {·,·} : gr A × gr A → gr A,  (r̄, s̄) ↦ the image of [r, s].

Note that in our original filtration of the Weyl algebra A_n(k), since the commutator brings us down by two degrees, this bracket vanishes identically; but using the alternative filtration does give a non-zero {·,·}. This {·,·} is an example of a Poisson bracket.
(Poisson algebra). An associative algebra B is a Poisson algebra if there is a k-bilinear bracket { ·, · } : B × B → B such that
– B is a Lie algebra under { ·, · }, i.e. {r, s} = −{s, r} and {{r, s}, t} + {{s, t}, r} + {{t, r}, s} = 0.
– We have the Leibniz rule {r, st} = s{r, t} + {r, s}t.
The second condition says {r, · } : B → B is a derivation.

2.2 More on An(k) and U(g)

Our goal now is to study modules of An(k). The first result tells us we must focus on infinite-dimensional modules.

Lemma. Suppose char k = 0. Then An(k) has no non-zero modules that are finite-dimensional k-vector spaces.

Proof. Suppose M is a finite-dimensional module. Then we've got an algebra homomorphism θ : An(k) → Endk(M) ∼= Mm(k), where m = dimk M. In An(k), we have Y1X1 − X1Y1 = 1. Applying the trace map, we know tr(θ(Y1)θ(X1) − θ(X1)θ(Y1)) = tr I = m. But since the trace is cyclic, the left hand side vanishes. So m = 0. So M is trivial.

A similar argument works for the quantum torus, but using determinants instead.

We're going to make use of our associated graded algebras from last time, which are isomorphic to polynomial algebras. Given a filtration {Ai} of A, we may filter a module with generating set S by setting Mi = AiS. Note that AjMi ⊆ Mi+j, which allows us to form an associated graded module gr M = ⊕ Mi/Mi−1. This is a graded gr A-module, which is finitely-generated if M is. So we've got a finitely-generated graded module over a graded commutative algebra. To understand this further, we prove some results about graded modules over commutative algebras, which are going to apply to our gr A and gr M.

Definition (Poincaré series). Let V be a graded module over a graded algebra S, say V = ⊕_{i=0}^∞ Vi. Then the Poincaré series is

P(V, t) = Σ_{i=0}^∞ (dim Vi) t^i.

Theorem (Hilbert–Serre theorem). The Poincaré series P(V, t) of a finitely-generated graded module V = ⊕_{i=0}^∞ Vi over a finitely-generated commutative algebra S = ⊕_{i=0}^∞ Si with homogeneous generating set x1, · · ·, xm is a rational function of the form

f(t) / Π_{i=1}^m (1 − t^{ki}),

where f(t) ∈ Z[t] and ki is the degree of the generator xi.

Proof. We induct on the number m of generators. If m = 0, then S = S0 = k, and V is therefore a finite-dimensional k-vector space. So P(V, t) is a polynomial. Now suppose m > 0. We assume the theorem is true for < m generators. Consider multiplication by xm. This gives a map xm : Vi → Vi+km, and we have an exact sequence

0 → Ki → Vi --xm--> Vi+km → Li+km → 0, (∗)

where K = ⊕ Ki = ker(xm : V → V) and L = ⊕ Li+km = coker(xm : V → V). Then K is a graded submodule of V and hence is a finitely-generated S-module, using the fact that S is Noetherian. Also, L = V/xmV is a quotient of V, and it is thus also finitely-generated. Now both K and L are annihilated by xm. So they may be regarded as S0[x1, · · ·, xm−1]-modules. Applying dimk to (∗), we know

dimk(Ki) − dimk(Vi) + dimk(Vi+km) − dimk(Li+km) = 0.

We multiply by t^{i+km}, and sum over i to get

t^{km} P(K, t) − t^{km} P(V, t) + P(V, t) − P(L, t) = g(t),

where g(t) is a polynomial with integral coefficients arising from consideration of the first few terms. We now apply the induction hypothesis to K and L, and we are done.

Corollary. If each k1 = · · · = km = 1, i.e. S is generated by S0 = k and homogeneous elements x1, · · ·, xm of degree 1, then for large enough i, we have dim Vi = φ(i) for some polynomial φ(t) ∈ Q[t] of degree d − 1, where d is the order of the pole of P(V, t) at t = 1. Moreover, Σ_{j=0}^i dim Vj = χ(i), where χ(t) ∈ Q[t] is of degree d.

Proof. From the theorem, we know that P(V, t) = f(t)/(1 − t)^d for some d, with f ∈ Z[t] and f(1) ≠ 0. But (1 − t)^{-1} = 1 + t + t^2 + · · ·, and by differentiating we get an expression

(1 − t)^{-d} = Σ_i (d + i − 1 choose d − 1) t^i.

If f(t) = a0 + a1 t + · · · + as t^s, then we get

dim Vi = a0 (d + i − 1 choose d − 1) + a1 (d + i − 2 choose d − 1) + · · · + as (d + i − s − 1 choose d − 1),

where we set (r choose d − 1) = 0 if r < d − 1. This expression is valid for i − s > 0, and can be rearranged to give φ(i) for a polynomial φ(t) ∈ Q[t], with

φ(t) = (f(1)/(d − 1)!) t^{d−1} + lower degree terms.

Since f(1) ≠ 0, this has degree d − 1. This implies that Σ_{j=0}^i dim Vj is a polynomial in Q[t] of degree d.

This φ(t) is the Hilbert polynomial, and χ(t) the Samuel polynomial. Some people call χ(t) the Hilbert polynomial instead, though.

We now want to apply this to our cases of gr A, where A = An(k) or U(g), filtered as before. Then we deduce that, for large enough i,

Σ_{j=0}^i dim Mj/Mj−1 = χ(i)

for a polynomial χ(t) ∈ Q[t]. But we also know

Σ_{j=0}^i dim Mj/Mj−1 = dim Mi.

We are now in a position to make a definition.

Definition (Gelfand-Kirillov dimension). Let A = An(k) or U(g) and M a finitely-generated A-module, filtered as before. Then the Gelfand-Kirillov dimension d(M) of M is the degree of the Samuel polynomial of gr M as a gr A-module.

This makes sense because gr A is a commutative algebra in this case. A priori, it seems like this depends on our choice of filtration on M, but actually, it doesn't. For a more general algebra, we can define the dimension as below:

Definition (Gelfand-Kirillov dimension). Let A be a finitely-generated k-algebra, filtered as before, and M a finitely-generated A-module, filtered as before. Then the GK-dimension of M is

d(M) = lim sup_{n→∞} log(dim Mn)/log n.

In the case of A = An(k) or U(g), this matches the previous definition. Again, this does not actually depend on the choice of generating sets.

Recall we showed that no non-zero An(k)-module M can have finite dimension as a k-vector space. So we know d(M) > 0. Also, we know that d(M) is an integer for the cases A = An or U(g), since it is the degree of a polynomial
. However, for general M and A, we can get non-integral values. In fact, the values we can get are 0, 1, 2, and then any real number ≥ 2. We can also have ∞ if the lim sup doesn't exist.

Example. If A = kG, then we have GK-dim(kG) < ∞ iff G has a subgroup H of finite index with H embedding into the upper unitriangular integral matrices, i.e. matrices with 1's on the diagonal, arbitrary integer entries above the diagonal, and 0's below. This is a theorem of Gromov, and is quite hard to prove.

Example. We have GK-dim(A) = 0 iff A is finite-dimensional as a k-vector space. We have GK-dim(k[X]) = 1, and in general GK-dim(k[X1, · · ·, Xn]) = n. Indeed, we have

dimk(mth homogeneous component) = (m + n − 1 choose n − 1).

So we have

χ(t) = (t + n choose n).

This is of degree n, with leading coefficient 1/n!.

We can make the following definition, which we will not use again:

Definition (Multiplicity). Let A be a commutative algebra, and M an A-module. The multiplicity of M with d(M) = d is d! × (the leading coefficient of χ(t)).

On the second example sheet, we will see that the multiplicity is integral. We continue looking at more examples.

Example. We have d(An(k)) = 2n, and d(U(g)) = dimk g. Here we are using the fact that the associated graded algebras are polynomial algebras.

Example. We met k[X1, · · ·, Xn] as the "canonical" An(k)-module. The filtration of the module matches the one we used when thinking about the polynomial algebra as a module over itself. So we get d(k[X1, · · ·, Xn]) = n.

Lemma. Let M be a finitely-generated An-module. Then d(M) ≤ 2n.

Proof. Take generators m1, · · ·, ms of M. Then there is a surjective filtered module homomorphism

An ⊕ · · · ⊕ An → M, (a1, · · ·, as) → Σ ai mi.

It is easy to see that quotients can only reduce dimension, so GK-dim(M) ≤ d(An ⊕ · · · ⊕ An). But χ_{An ⊕ ··· ⊕ An} = s · χ_{An} has degree 2n.

More interestingly, we have the following result:

Theorem (Bernstein's inequality). Let M be a non-zero finitely-generated An(k)-module, and char k = 0. Then d(M) ≥ n.

Definition (Holonomic module). An An(k)-module M is holonomic iff d(M) = n.

If we have a holonomic module, then we can quotient by a maximal submodule, and get a simple holonomic module. For a long time, people thought all simple modules were holonomic, until someone discovered a simple module that is not holonomic. In fact, most simple modules are not holonomic, but we somehow managed to believe otherwise.

Proof. Take a generating set and form the canonical filtrations {Ai} of An(k) and {Mi} of M. We let χ(t) be the Samuel polynomial. Then for large enough i, we have χ(i) = dim Mi. We claim that

dim Ai ≤ dim Homk(Mi, M2i) = dim Mi × dim M2i.

Assuming this, for large enough i, we have dim Ai ≤ χ(i)χ(2i). But we know

dim Ai = (i + 2n choose 2n),

which is a polynomial of degree 2n. But χ(t)χ(2t) is a polynomial of degree 2d(M ).
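The degree comparison in this step can be sanity-checked numerically. The sketch below is my own illustration (not from the notes): the helper names and the sample stand-in for the Samuel polynomial `chi` are hypothetical. It computes dim Ai = (i + 2n choose 2n) for the canonical filtration and checks its degree-2n growth.

```python
from math import comb, factorial

def dim_A(i, n):
    # dimension of the i-th filtered piece of A_n(k): monomials in
    # X_1..X_n, Y_1..Y_n of total degree <= i
    return comb(i + 2 * n, 2 * n)

n = 2
# dim_A(i, n) is a degree-2n polynomial in i with leading coefficient 1/(2n)!
ratio = dim_A(10**4, n) / (10**4) ** (2 * n)
assert abs(ratio - 1 / factorial(2 * n)) < 1e-3

def chi(i, d):
    # a sample monic polynomial of degree d, standing in for the Samuel
    # polynomial of a hypothetical module with d(M) = d
    return i ** d + 1

# if d(M) were smaller than n, then chi(i) * chi(2i) (degree 2d) would
# eventually fall below dim_A(i, n), contradicting dim A_i <= chi(i) chi(2i)
d = 1
assert dim_A(100, n) > chi(100, d) * chi(200, d)
```

The point is that a polynomial inequality dim Ai ≤ χ(i)χ(2i), valid for all large i, can only hold if 2n ≤ 2d(M).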
So we get that n ≤ d(M ). So it remains to prove the claim. It suffices to prove that the natural map Ai → Homk(Mi, M2i) given by multiplication is injective. So we want to show that if a ∈ Ai is non-zero, then aMi ≠ 0. We prove this by induction on i. When i = 0, then A0 = k, and M0 is a finite-dimensional k-vector space. Then the result is obvious. If i > 0, we suppose the result is true for smaller i. We let a ∈ Ai be non-zero. If aMi = 0, then certainly a ∉ k. We express

a = Σ c_{α,β} X1^{α1} X2^{α2} · · · Xn^{αn} Y1^{β1} · · · Yn^{βn},

where α = (α1, · · ·, αn), β = (β1, · · ·, βn), and c_{α,β} ∈ k. If possible, pick a j such that c_{α,β} ≠ 0 for some α with αj ≠ 0 (this happens when there is an X involved). Then

[Yj, a] = Σ αj c_{α,β} X1^{α1} · · · Xj^{αj − 1} · · · Xn^{αn} Y1^{β1} · · · Yn^{βn},

and this is non-zero, and lives in Ai−1. If aMi = 0, then certainly aMi−1 = 0. Hence [Yj, a]Mi−1 = (Yj a − a Yj)Mi−1 = 0, using the fact that Yj Mi−1 ⊆ Mi. This is a contradiction. If a only has Y's involved, then we do something similar using [Xj, a].

There is also a geometric way of doing this. We take k = C. We know gr An is a polynomial algebra

gr An = k[¯X1, · · ·, ¯Xn, ¯Y1, · · ·, ¯Yn],

which may be viewed as the coordinate algebra of the cotangent bundle on affine n-space Cn. The points of this correspond to the maximal ideals of gr An. If I is a left ideal of An(C),
then we can form gr I and we can consider the set of maximal ideals containing it. This gives us the characteristic variety Ch(An/I). We saw that there was a Poisson bracket on gr An, and this may be used to define a skew-symmetric form on the tangent space at any point of the cotangent bundle. In this case, this is non-degenerate skew-symmetric form. We can consider the tangent space U of Ch(An/I) at a non-singular point, and there’s a theorem of Gabber (1981) which says that U ⊇ U ⊥, where ⊥ is with respect to the skew-symmetric form. By non-degeneracy, we must have dim U ≥ n, and we also know that dim Ch(An/I) = d(An/I). So we find that d(An/I) ≥ n. In the case of A = U (g), we can think of gr A as the coordinate algebra on g∗, the vector space dual of g. The Poisson bracket leads to a skew-symmetric form on tangent spaces at points of g∗. In this case, we don’t necessarily get non-degeneracy. However, on g, we have the adjoint action of the corresponding Lie group G, and this induces a co-adjoint action on g∗. Thus g∗ is a disjoint union of orbits. If we consider the induced skew-symmetric form on tangent spaces of orbits (at non-singular points), then it is non-degenerate. 2.3 Injective modules and Goldie’s theorem The goal of this section is to prove Goldie’s theorem. Theorem (Goldie’s theorem). Let A be a right Noetherian algebra with no non-zero ideals all of whose elements are nilpotent. Then A embeds in a finite direct sum of matrix algebras over division algebras. The outline of the proof is as follows — given any A, we embed A in an “injective hull” E(A). We will then find that similar to what we did in Artin– Wedderburn, we can decompose End(E(A))
into a direct sum of matrix algebras over division algebras. But we actually cannot. We will have to first quotient End(E(A)) by some ideal I. On the other hand, we do not actually have an embedding of A ∼= EndA(A) into End(E(A)). Instead, what we have is only a homomorphism EndA(A) → End(E(A))/I, where we quotient out by the same ideal I. So actually our two problems happen to cancel each other out.

We will then prove that the kernel of this map contains only nilpotent elements, and then our hypothesis implies this homomorphism is indeed an embedding.

We begin by first constructing the injective hull. This is going to involve talking about injective modules, which are dual to the notion of projective modules.

Definition (Injective module). An A-module E is injective if for every pair of A-module maps θ : M → N and φ : M → E with θ injective, there exists a map ψ : N → E such that φ = ψθ. Equivalently, Hom( ·, E) is an exact functor.

Example. Take A = k. Then all k-vector spaces are injective k-modules.

Example. Take A = k[X]. Then k(X) is an injective k[X]-module.

Lemma. Every direct summand of an injective module is injective, and direct products of injectives are injective.

Proof. Same as the proof for projective modules.

Lemma. Every A-module may be embedded in an injective module. We say the category of A-modules has enough injectives. The dual result for projectives was immediate, as free modules are projective.

Proof. Let M be a right A-module. Then Homk(A, M) is a right A-module via

(f a)(x) = f(ax).

We claim that Homk(A, M) is an injective module. Suppose we have an injective A-module map θ : N1 → M1 and a map φ : N1 → Homk(A, M). We consider the k-module map α : N1 → M, where α(n1) = φ(n1)(1).
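Before continuing, the right A-module structure (f a)(x) = f(ax) on Homk(A, M) can be sanity-checked computationally. The sketch below is my own illustration (not from the notes): it takes the hypothetical toy choice A = M = k[t]/(t^2), encodes k-linear maps A → A as 2 × 2 matrices on the basis {1, t}, and verifies the action axiom f · (ab) = (f · a) · b.

```python
def mult(u, v):
    # multiplication in A = k[t]/(t^2) on coefficient vectors [c0, c1]: t*t = 0
    return [u[0] * v[0], u[0] * v[1] + u[1] * v[0]]

def left_mult_matrix(a):
    # matrix of x -> a*x; columns are the images of the basis vectors 1, t
    c0, c1 = mult(a, [1, 0]), mult(a, [0, 1])
    return [[c0[0], c1[0]], [c0[1], c1[1]]]

def act(f, a):
    # (f . a)(x) = f(a x), i.e. the matrix f composed with left mult by a
    La = left_mult_matrix(a)
    return [[sum(f[i][k] * La[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f = [[1, 2], [3, 5]]      # an arbitrary k-linear map A -> A
a, b = [1, 4], [2, -1]    # arbitrary elements of A
assert act(f, mult(a, b)) == act(act(f, a), b)   # f.(ab) == (f.a).b
```

This works because act(f, a) is f composed with left multiplication by a on the inside, and left multiplication satisfies L_{ab} = L_a L_b, so precomposition gives a right action.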
Since M is injective as a k-module, we can find the β such that α = βθ. We define ψ : M1 → Homk(A, M) by ψ(m1)(x) = β(m1 x). It is straightforward to check that this does the trick. Also, we have an embedding M → Homk(A, M) by m → (φm : x → mx).

The category theorist will write the proof in a line as

HomA( ·, Homk(A, M)) ∼= Homk( · ⊗A A, M) ∼= Homk( ·, M),

which is exact since M is injective as a k-module.

Note that neither the construction of Homk(A, M), nor the proof that it is injective, requires the right A-module structure of M. All we need is that M is an injective k-module.

Lemma. An A-module is injective iff it is a direct summand of every extension of itself.

Proof. Suppose E is injective and E′ is an extension of E. Then by injectivity, the identity map E → E extends to a map ψ : E′ → E. So E′ = E ⊕ ker ψ. Conversely, suppose E is a direct summand of every extension. By the previous lemma, we can embed E in an injective module E′. This implies that E is a direct summand of E′, and hence injective.

There is some sort of "smallest" injective a module embeds into, and this is called the injective hull, or injective envelope. This is why our injectives are called E. The "smallness" will be captured by the fact that it is an essential extension.

Definition (Essential submodule). An essential submodule M of an A-module N is one where M ∩ V ≠ {0} for every non-zero submodule V of N. We say N is an essential extension of M.

Lemma. An essential extension of an essential extension is essential.

Proof. Suppose M < E < F are essential extensions. Then given non-zero N ≤ F, we know N ∩ E ≠ {0}, and this is a submodule of E. So (N ∩ E) ∩ M = N ∩ M ≠ {0}. So F is an essential extension of M.

Lemma. A maximal essential extension is an injective module. Such maximal things exist by Zorn's lemma.

Proof. Let E be a maximal essential extension of M, and consider any embedding E → F. We shall show that E is a direct summand of F. Let S be the set of all non-zero submodules V of F with V ∩ E = {0}. We apply Zorn's lemma to get a maximal such module, say V1. Then E embeds into F/V1 as an essential submodule. By transitivity of essential extensions, F/V1 is an essential extension of M, but E is maximal. So E ∼= F/V1. In other words, F = E ⊕ V1.

We can now make the following definition:

Definition (Injective hull). A maximal essential extension of M is the injective hull (or injective envelope) of M, written E(M).

Proposition. Let M be an A-module, with an inclusion M → I into an injective module. Then this extends to an inclusion E(M) → I.

Proof. By injectivity of I, the inclusion M → I extends to a map ψ : E(M) → I. We know ψ restricts to the identity on M. So ker ψ ∩ M = {0}. Since E(M) is an essential extension of M, we must have ker ψ = 0. So E(M) embeds into I.

Proposition. Suppose E is an injective essential extension of M. Then E ∼= E(M). In particular, any two injective hulls are isomorphic.

Proof. By the previous proposition, E(M) embeds into E. But E(M) is a maximal essential extension. So this forces E = E(M).

Using what we have got, it is not hard to see that:

Proposition. E(M1 ⊕ M2) = E(M1) ⊕ E(M2).

Proof. We know that E(M1) ⊕ E(M2) is also injective (since finite
direct sums are the same as direct products), and also M1 ⊕ M2 embeds in E(M1) ⊕ E(M2). So it suffices to prove this extension is essential. Let V ≤ E(M1) ⊕ E(M2) be non-zero. Then the image of V in (E(M1) ⊕ E(M2))/E(M1), or its image in (E(M1) ⊕ E(M2))/E(M2), is non-zero. We wlog it is the latter. Note that we can naturally view

(V + E(M2))/E(M2) ≤ (E(M1) ⊕ E(M2))/E(M2) ∼= E(M1).

Since M1 ⊆ E(M1) is essential, we know M1 ∩ ((V + E(M2))/E(M2)) ≠ {0}. So there is some m1 + m2 ∈ V such that m2 ∈ E(M2) and m1 ∈ M1 is non-zero. Now consider

{m ∈ E(M2) : am1 + m ∈ V for some a ∈ A}.

This is a non-empty submodule of E(M2), and so contains an element of M2, say n. Then we know am1 + n ∈ V ∩ (M1 ⊕ M2), and we are done.

The next two examples of injective hulls will be stated without proof:

Example. Take A = k[X], and M = k[X]. Then E(M) = k(X).

Example. Let A = k[X] and V = k be the trivial module, where X acts by 0. Then E(V) = k[X, X^{-1}]/Xk[X], which is a quotient of A-modules. We note V embeds in this as

V ∼= k[X]/Xk[X] → k[X, X^{-1}]/Xk[X].

Definition (Uniform module). A non-zero module V is uniform if given non-zero submodules V1, V2, we have V1 ∩ V2 ≠ {0}.

Lemma. V is uniform iff E(V) is indecomposable.

Proof. Suppose E(V) = A ⊕ B, with A, B non-zero. Then V ∩ A ≠ {0} and V ∩ B ≠ {0}, since the extension is essential. So we have two non-zero submodules of V that intersect trivially, as (V ∩ A) ∩ (V ∩ B) ⊆ A ∩ B = {0}.

Conversely, suppose V is not uniform, and let V1, V2 be non-zero submodules that intersect trivially. By Zorn's lemma, we may suppose these are a maximal such pair. We claim

E(V1) ⊕ E(V2) = E(V1 ⊕ V2) = E(V).

To prove this, it suffices to show that V is an essential extension of V1 ⊕ V2, so that E(V) is an injective hull of V1 ⊕ V2. Let W ≤ V be non-zero. If W ∩ (V1 ⊕ V2) = 0, then V1 and V2 ⊕ W form a larger pair of submodules with trivial intersection, which is not possible. So we are done.

Definition (Domain). An algebra is a domain if xy = 0 implies x = 0 or y = 0.

This is just the definition of an integral domain, but when we have non-commutative algebras, we usually leave out the word "integral". To show that the algebras we know and love are indeed domains, we again do some deformation.

Lemma. Let A be a filtered algebra, which is exhaustive and separated. Then if gr A is a domain, so is A.

Proof. Let x ∈ Ai \ Ai−1, and y ∈ Aj \ Aj−1. We can find such i, j for any non-zero elements x, y ∈ A because the filtration is exhaustive and separated. Then we have

¯x = x + Ai−1 ≠ 0 ∈ Ai/Ai−1, ¯y = y + Aj−1 ≠ 0 ∈ Aj/Aj−1.

If gr A is a domain, then we deduce ¯x ¯y ≠ 0. So we deduce that xy ∉ Ai+j−1. In particular, xy ≠ 0.

Corollary. An(k) and U(g) are domains.

Lemma. Let A be a right Noetherian domain. Then AA is uniform, i.
e. E(AA) is indecomposable. 46 2 Noetherian algebras III Algebras Proof. Suppose not, and so there are xA and yA non-zero such that xA ∩ yA = {0}. So xA ⊕ yA is a direct sum. But A is a domain and so yA ∼= A as a right A-module. Thus yxA ⊕ yyA is a direct sum living inside yA. Further decomposing yyA, we find that xA ⊕ yxA ⊕ y2xA ⊕ · · · ⊕ ynxA is a direct sum of non-zero submodules. But this is an infinite strictly ascending chain as n → ∞, which is a contradiction. Recall that when we proved Artin–Wedderburn, we needed to use Krull– Schmidt, which told us the decomposition is unique up to re-ordering. That relied on the endomorphism algebra being local. We need something similar here. Lemma. Let E be an indecomposable injective right module. Then EndA(E) is a local algebra, with the unique maximal ideal given by I = {f ∈ End(E) : ker f is essential}. Note that since E is indecomposable injective, given any non-zero V ≤ E, we know E(V ) embeds into, and hence is a direct summand of E. Hence E(V ) = E. So ker f being essential is the same as saying ker f being non-zero. However, this description of the ideal will be useful later on. Proof. Let f : E → E and ker f = {0}. Then f (E) is an injective module, and so is a direct summand of E. But E is indecomposable. So f is surjective. So it is an isomorphism, and hence invertible. So it remains to show that I = {f ∈ End(E) : ker f is essential} is an ideal. If ker f and ker g are essential, then ker(f + g) ≥ ker f ∩ ker g, and the intersection of essential submodules is essential. So ker(f + g) is also essential. Also, if ker g is essential, and f is arbitrary
, then ker(f ◦ g) ≥ ker g, and is hence also essential. So I is an ideal; since every element of End(E) outside I is invertible, it is the unique maximal (left) ideal.

The point of this lemma is to allow us to use Krull–Schmidt.

Lemma. Let M be a non-zero Noetherian module. Then M is an essential extension of a direct sum of uniform submodules N1, · · ·, Nr. Thus E(M) ∼= E(N1) ⊕ · · · ⊕ E(Nr) is a direct sum of finitely many indecomposables. This decomposition is unique up to re-ordering (and isomorphism).

Proof. We first show any non-zero Noetherian module contains a uniform one. Suppose not, and M is in particular not uniform. So it contains non-zero V1 and V2′ with V1 ∩ V2′ = 0. But V2′ is not uniform by assumption. So it contains non-zero V2 and V3′ with zero intersection. We keep on repeating. Then

V1 ⊕ V2 ⊕ · · · ⊕ Vn

is a strictly ascending chain of submodules of M as n grows, which is a contradiction.

Now for non-zero Noetherian M, pick N1 uniform in M. Either N1 is essential in M, and we're done, or there is some non-zero N2′ with N1 ∩ N2′ = 0. We pick N2 uniform in N2′. Then either N1 ⊕ N2 is essential, or ... And we are done since M is Noetherian. Taking injective hulls, we get E(M) = E(N1) ⊕ · · · ⊕ E(Nr), and we are done by Krull–Schmidt and the previous lemma.

This is the crucial lemma, which isn't really hard. This allows us to define yet another dimension for Noetherian algebras.

Definition (Uniform dimension). The uniform dimension, or Goldie rank, of M is the number of indecomposable direct summands of E(M).

This is analogous to vector space dimensions in some ways.

Example. The Goldie rank of domains is 1, as we showed AA is uniform. This is true for An(k) and U(g).

Lemma. Let E1, · · ·, Er be indecomposable injectives. Put E = E1 ⊕ · · · ⊕ Er. Let I = {f ∈ EndA(E) : ker f is essential}. This is an ideal, and then

EndA(E)/I ∼= Mn1(D1) ⊕ · · · ⊕ Mns(Ds)

for some division algebras Di.

Proof. We write the decomposition instead as E = E1^{n1} ⊕ · · · ⊕ Es^{ns}, grouping isomorphic summands together. Then as in basic linear algebra, we know elements of End(E) can be written as s × s matrices whose (i, j)th entry is an element of Hom(Ej^{nj}, Ei^{ni}). Now note that if Ei ≇ Ej, then the kernel of a map Ei → Ej is essential in Ei. So quotienting out by I kills all of these "off-diagonal" entries. Also Hom(Ei^{ni}, Ei^{ni}) = Mni(End(Ei)), and so quotienting out by I gives

Mni(End(Ei)/{f : ker f essential}) ∼= Mni(Di),

where Di ∼= End(Ei)/{f : ker f essential}, which we know is a division algebra, since we quotient the local algebra End(Ei) by its unique maximal ideal.

The final piece to proving Goldie's theorem is the following:

Lemma. If A is a right Noetherian algebra, then any f : AA → AA with ker f essential in AA is nilpotent.

Proof. Consider 0 < ker f ≤ ker f^2 ≤ · · ·. Suppose f is not nilpotent. We claim that this is a strictly increasing chain. Indeed, for all n, we have f^n(AA) ≠ 0. Since ker f is essential, we know f^n(AA) ∩ ker f ≠ {0}. This forces ker f^{n+1} > ker f^n, which is a contradiction, since A is right Noetherian.

We can now prove Goldie's theorem.

Theorem (Goldie's theorem). Let A be a right Noetherian algebra with no non-zero ideals all of whose elements are nilpotent. Then A embeds in a finite direct sum of matrix algebras over division algebras.

Proof. As usual, we have a map A → EndA(AA), sending x to left multiplication by x. Writing θ : AA → E(AA) for the inclusion, any map f : AA → AA lifts by injectivity to a map E(AA) → E(AA) restricting to f on AA. This is not necessarily unique. However, two such lifts g and g′ differ by a map g − g′ with AA in its kernel, hence with essential kernel. So g − g′ lies in I. Thus, composing, we get a well-defined map

A → EndA(AA) → End(E(AA))/I.

The kernel of this consists of those elements of A whose left multiplication has essential kernel. By the previous lemma, this is an ideal all of whose elements are nilpotent. By assumption, any such ideal vanishes. So we have an embedding of A in End(E(AA))/I, which we know to be a direct sum of matrix algebras over division rings.

Goldie didn't present it like this. This work on injective modules is due to Matlis.

We saw that (right Noetherian) domains have Goldie rank 1. So we get that End(E(A))/I ∼= D for some division algebra D. So by Goldie's theorem, a right Noetherian domain embeds in a division algebra. In particular, this is true for An(k) and U(g).

49 3 Hochschild homology and cohomology III Algebras

3 Hochschild homology and cohomology

3.1 Introduction

We now move on to talk about Hochschild (co)homology. We will mostly talk about Hochschild cohomology, as that is the one that is interesting. Roughly speaking, given a k-algebra A and an A-A-bimodule M, Hochschild cohomology is an infinite sequence of k-vector spaces HH n(A, M) indexed by n ∈ N associated to the data. While there is in theory an infinite number of such vector spaces, we are mostly going to focus on the cases of n = 0
, 1, 2, and we will see that these groups can be interpreted as things we are already familiar with. The construction of these Hochschild cohomology groups might seem a bit arbitrary. It is possible to justify these a priori using the general theory of homological algebra and/or model categories. On the other hand, Hochschild cohomology is sometimes used as motivation for the general theory of homological algebra and/or model categories. Either way, we are not going to develop these general frameworks, but are going to justify Hochschild cohomology in terms of its practical utility.

Unsurprisingly, Hochschild (co)homology was first developed by Hochschild in 1945, albeit only working with algebras of finite (vector space) dimension. It was introduced to give a cohomological interpretation and generalization of some results of Wedderburn. Later, in 1962/1963, Gerstenhaber saw how Hochschild cohomology was relevant to the deformations of algebras. More recently, it's been realized that the Hochschild cochain complex has additional algebraic structure, which allows yet more information about deformations.

As mentioned, we will work with A-A-bimodules over an algebra A. If our algebra has an augmentation, i.e. a ring map to k, then we can have a decent theory that works with left or right modules. However, for the sake of simplicity, we shall just work with bimodules to make our lives easier.

Recall that an A-A-bimodule is a k-vector space with compatible left and right A-actions. For example, A is an A-A-bimodule, and we sometimes write it as AAA to emphasize this. More generally, we can view A⊗(n+2) = A ⊗k · · · ⊗k A as an A-A-bimodule by

x(a0 ⊗ a1 ⊗ · · · ⊗ an+1)y = (xa0) ⊗ a1 ⊗ · · · ⊗ (an+1 y).

The crucial property of this is that for any n ≥ 0, the bimodule A⊗(n+2) is a free A-A-bimodule. For example, A ⊗k A is free on a single generator 1 ⊗k 1, whereas if {xi} is a k-basis of A, then A ⊗k A ⊗k A is free on {1 ⊗k xi ⊗k 1}. The general theory of homological algebra says we should be interested in such free things.

Definition (Free resolution). Let A be an algebra and M an A-A-bimodule. A free resolution of M is an exact sequence of the form

· · · --d2--> F2 --d1--> F1 --d0--> F0 → M,

where each Fn is a free A-A-bimodule. More generally, we can consider a projective resolution instead, where we allow the bimodules to be projective.

In this course, we are only interested in one particular free resolution:

Definition (Hochschild chain complex). Let A be a k-algebra with multiplication map µ : A ⊗ A → A. The Hochschild chain complex is

· · · --d1--> A ⊗k A ⊗k A --d0--> A ⊗k A --µ--> A → 0.

We refer to A⊗k(n+2) as the degree n term. The differential d : A⊗k(n+3) → A⊗k(n+2) is given by

d(a0 ⊗k · · · ⊗k an+2) = Σ_{i=0}^{n+1} (−1)^i a0 ⊗k · · · ⊗k ai ai+1 ⊗k · · · ⊗k an+2.

This is a free resolution of AAA (the exactness is merely a computation, and we shall leave that as an exercise to the reader). In a nutshell, given an A-A-bimodule M, its Hochschild homology and cohomology are obtained by applying · ⊗A-A M and HomA-A( ·, M) to the Hochschild chain complex, and then taking the homology and cohomology of the resulting chain complex. We shall explore in more detail what this means.

It is a general theorem that we could have applied the functors · ⊗A-
A M and HomA-A( ·, M) to any projective resolution of AAA and take the (co)homology, and the resulting vector spaces will be the same. However, we will not prove that, and will just always stick to this standard free resolution.

3.2 Cohomology

As mentioned, the construction of Hochschild cohomology involves applying HomA-A( ·, M) to the Hochschild chain complex, and looking at the terms HomA-A(A⊗(n+2), M). This is usually not very convenient to manipulate, as it involves talking about bimodule homomorphisms. However, we note that A⊗(n+2) is a free A-A-bimodule generated by a basis of A⊗n. Thus, there is a canonical isomorphism

HomA-A(A⊗(n+2), M) ∼= Homk(A⊗n, M),

and k-linear maps are much simpler to work with.

Definition (Hochschild cochain complex). The Hochschild cochain complex of an A-A-bimodule M is what we obtain by applying HomA-A( ·, M) to the Hochschild chain complex of A. Explicitly, we can write it as

Homk(k, M) --δ0--> Homk(A, M) --δ1--> Homk(A ⊗ A, M) --δ2--> · · ·,

where

(δ0 f)(a) = a f(1) − f(1) a
(δ1 f)(a1 ⊗ a2) = a1 f(a2) − f(a1 a2) + f(a1) a2
(δ2 f)(a1 ⊗ a2 ⊗ a3) = a1 f(a2 ⊗ a3) − f(a1 a2 ⊗ a3) + f(a1 ⊗ a2 a3) − f(a1 ⊗ a2) a3

and in general

(δn−1 f)(a1 ⊗ · · · ⊗ an) = a1 f(a2 ⊗ · · · ⊗ an) + Σ_{i=1}^{n−1} (−1)^i f(a1 ⊗ · · · ⊗ ai ai+1 ⊗ · · · ⊗ an) + (−1)^n f(a1 ⊗ · · · ⊗ an−1) an.

The reason the end terms look different is that we replaced HomA-A(A⊗(n+2), M) with Homk(A⊗n, M). The crucial observation is that the exactness of the Hochschild chain complex, and in particular the fact that d^2 = 0, implies im δn−1 ⊆ ker δn.

Definition (Cocycles). The cocycles are the elements in ker δn.

Definition (Coboundaries). The coboundaries are the elements in im δn.

These names came from algebraic topology.

Definition (Hochschild cohomology groups). We define

HH 0(A, M) = ker δ0,
HH n(A, M) = ker δn / im δn−1 for n ≥ 1.

These are k-vector spaces. If we do not want to single out HH 0, we can extend the Hochschild cochain complex to the left with 0, setting δn = 0 for n < 0 (or equivalently extending the Hochschild chain complex to the right similarly). Then HH 0(A, M) = ker δ0 / im δ−1 = ker δ0.

The first thing we should ask ourselves is when the cohomology groups vanish. There are two scenarios where we can immediately tell that the (higher) cohomology groups vanish.

Lemma. Let M be an injective bimodule. Then HH n(A, M) = 0 for all n ≥ 1.

Proof. HomA-A( ·, M) is exact.

Lemma. If AAA is a projective bimodule, then HH n(A, M) = 0 for all M and all n ≥ 1.

If we believed the previous remark that we could compute Hochschild cohomology with any projective resolution, then this result is immediate — indeed, we can use · · · → 0 → 0 → A → A → 0 as the projective resolution. However, since we don't want to prove such general results, we shall provide an explicit computation.

Proof. If AAA is projective, then all A⊗n are projective. At each degree n, we can split up the Hochschild chain complex as the short exact sequence

0 → A⊗(n+3)/ker dn --dn--> A⊗(n+2) --dn−1--> im dn−1 → 0.

The module im dn−1 is a submodule of A⊗(n+1), and is hence projective. So we have

A⊗(n+2) ∼= A⊗(n+3)/ker dn ⊕ im dn−1,

and we can write the Hochschild chain complex at n as

ker dn ⊕ A⊗(n+3)/ker dn --dn--> A⊗(n+3)/ker dn ⊕ im dn−1 --dn−1--> A⊗(n+1)/im dn−1 ⊕ im dn−1,

where dn is given by (a, b) → (b, 0) and dn−1 by (c, d) → (0, d). Now Hom( ·, M) certainly preserves the exactness of this, and so the Hochschild cochain complex is also exact. So we have HH n(A, M) = 0 for n ≥ 1.

This is a rather strong result. By knowing something about A itself, we deduce that the Hochschild cohomology of any bimodule whatsoever must vanish. Of course, it is not true that HH n(A, M) vanishes in general for n ≥ 1, or else we would have a rather boring theory. In general, we define

Definition (Dimension). The dimension of an algebra A is

Dim A = sup{n : HH n(A, M) ≠ 0 for some A-A-bimodule M}.

This can be infinite if such a sup does not exist.

Thus, we showed that if AAA embeds as a direct summand in A ⊗ A, then Dim A = 0.

Definition (k-separable). An algebra A is k-separable if AAA embeds as a direct summand
of A ⊗ A.

Since A ⊗ A is a free A-A-bimodule, this condition is equivalent to A being projective as an A-A-bimodule. However, there is some point in writing the definition like this. Note that an A-A-bimodule is equivalently a left A ⊗ A^op-module. Then A is a direct summand of A ⊗ A if and only if there is a separating idempotent e ∈ A ⊗ A^op so that A, viewed as an A ⊗ A^op-module, is (A ⊗ A^op)e. This is technically convenient, because it is often easier to write down a separating idempotent than to prove directly that A is projective.

Note that whether we write A ⊗ A^op or A ⊗ A is merely a matter of convention. They have the same underlying set. The notation A ⊗ A is more convenient when we take higher powers, but we can think of A ⊗ A^op as taking A as a left-A right-k module and A^op as a left-k right-A module, and tensoring them gives an A-A-bimodule.

We just proved that separable algebras have dimension 0. Conversely, we have

Lemma. If Dim A = 0, then A is separable.

Proof. Note that there is a short exact sequence

  0 → ker µ → A ⊗ A --µ--> A → 0

If we can show this splits, then A is a direct summand of A ⊗ A. To do so, we need to find a map A ⊗ A → ker µ that restricts to the identity on ker µ.

To do so, we look at the first few terms of the Hochschild chain complex

  · · · --d--> im d ⊕ ker µ --d--> A ⊗ A --µ--> A → 0.

By assumption, for any M, applying Hom_{A-A}(·, M) to the chain complex gives an exact sequence. Omitting the annoying A-A subscript, this sequence looks like

  0 → Hom(A, M) --µ∗--> Hom(A ⊗ A, M) --(∗)--> Hom(ker µ, M) ⊕ Hom(im d,
M) --d∗--> · · ·

Now d∗ sends Hom(ker µ, M) to zero. So Hom(ker µ, M) must be in the image of (∗). So the map

  Hom(A ⊗ A, M) → Hom(ker µ, M)

must be surjective. This is true for any M. In particular, we can pick M = ker µ. Then the identity map id_{ker µ} lifts to a map A ⊗ A → ker µ whose restriction to ker µ is the identity. So we are done.

Example. M_n(k) is separable. It suffices to write down the separating idempotent. We let E_{ij} be the elementary matrix with 1 in the (i, j)th slot and 0 otherwise. We fix j, and then

  e = Σ_i E_{ij} ⊗ E_{ji} ∈ A ⊗ A^op

is a separating idempotent.

Example. Let A = kG with char k ∤ |G|. Then

  A ⊗ A^op = kG ⊗ (kG)^op ≅ kG ⊗ kG.

But this is just isomorphic to k(G × G), which is again semi-simple. Thus, as a bimodule, A ⊗ A is completely reducible. So its quotient A is a direct summand (of bimodules).

So we know that whenever char k ∤ |G|, then kG is k-separable. The obvious question is — is this notion actually a generalization of separable field extensions? This is indeed the case.

Fact. Let L/K be a finite field extension. Then L is separable as a K-algebra iff L/K is a separable field extension.

However, k-separable algebras must be finite-dimensional k-vector spaces. So this doesn’t carry over to the infinite case.

In the remainder of the chapter, we will study what Hochschild cohomology means in low dimensions. We start with HH^0. The next two propositions follow from just unwrapping the definitions:

Proposition. We have

  HH^0(A, M) = {m ∈ M : am − ma = 0
for all a ∈ A}.

In particular, HH^0(A, A) is the center of A.

Proposition. We have

  ker δ^1 = {f ∈ Hom_k(A, M) : f(a_1 a_2) = a_1 f(a_2) + f(a_1) a_2}.

These are the derivations from A to M. We write this as Der(A, M). On the other hand,

  im δ^0 = {f ∈ Hom_k(A, M) : f(a) = am − ma for some m ∈ M }.

These are called the inner derivations from A to M. So

  HH^1(A, M) = Der(A, M) / InnDer(A, M).

Setting A = M, we get the derivations and inner derivations of A.

Example. If A = k[X] and char k = 0, then

  Der A = { p(X) d/dX : p(X) ∈ k[X] },

and there are no (non-trivial) inner derivations because A is commutative. So we find

  HH^1(k[X], k[X]) ≅ k[X].

In general, the derivations Der(A) form a Lie algebra, since D_1 D_2 − D_2 D_1 ∈ End_k(A) is in fact a derivation if D_1 and D_2 are.

There is another way we can think about derivations, which is via semi-direct products.

Definition (Semi-direct product). Let M be an A-A-bimodule. We can form the semi-direct product of A and M, written A ⋉ M, which is an algebra with elements (a, m) ∈ A × M, and multiplication given by

  (a_1, m_1) · (a_2, m_2) = (a_1 a_2, a_1 m_2 + m_1 a_2).

Addition is given by the obvious one.

Alternatively, we can write A ⋉ M ≅ A + Mε, where ε commutes with everything and ε^2 = 0. Then Mε forms an ideal with (Mε)^2 = 0.

In particular, we can look at A ⋉ A ≅ A + Aε. This is often written as A[ε].
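As a concrete sanity check on these descriptions, we can compute HH^0 and HH^1 for a tiny algebra by plain linear algebra: write δ^0 and δ^1 as matrices and take ranks. The sketch below is our own illustration, not part of the notes; it treats the dual numbers A = M = k[ε]/(ε^2) in characteristic 0. Since A is commutative, HH^0(A, A) should be all of A (the centre, dimension 2), and HH^1(A, A) = Der(A) should be 1-dimensional, spanned by ε d/dε.

```python
import numpy as np

# Structure constants of A = k[eps]/(eps^2) (dual numbers), basis {1, eps}:
# mult[i, j] = coordinate vector of basis_i * basis_j.
n = 2
mult = np.zeros((n, n, n))
mult[0, 0, 0] = 1   # 1 * 1     = 1
mult[0, 1, 1] = 1   # 1 * eps   = eps
mult[1, 0, 1] = 1   # eps * 1   = eps
                    # eps * eps = 0
e = np.eye(n)

def left(a, v):     # basis_a . v, for v a coordinate vector (here M = A)
    return sum(v[j] * mult[a, j] for j in range(n))

def right(v, b):    # v . basis_b
    return sum(v[i] * mult[i, b] for i in range(n))

# delta^0 : M -> Hom(A, M),  (delta^0 m)(a) = am - ma
d0 = np.zeros((n * n, n))
for m in range(n):
    for a in range(n):
        d0[a * n:(a + 1) * n, m] = left(a, e[m]) - right(e[m], a)

# delta^1 : Hom(A, M) -> Hom(A tensor A, M),
# (delta^1 f)(a tensor b) = a f(b) - f(ab) + f(a) b
d1 = np.zeros((n ** 3, n * n))
for j in range(n):       # basis cochains: f(basis_j) = basis_c, zero elsewhere
    for c in range(n):
        for a in range(n):
            for b in range(n):
                val = -mult[a, b, j] * e[c]
                if b == j:
                    val = val + left(a, e[c])
                if a == j:
                    val = val + right(e[c], b)
                d1[(a * n + b) * n:(a * n + b + 1) * n, j * n + c] = val

rank = np.linalg.matrix_rank
assert not (d1 @ d0).any()           # im delta^0 lies inside ker delta^1
hh0 = n - rank(d0)                   # dim ker delta^0
hh1 = (n * n - rank(d1)) - rank(d0)  # dim ker delta^1 - dim im delta^0
print(hh0, hh1)                      # 2 1
```

Swapping in the structure constants of any other finite-dimensional algebra (and adjusting `n`) reuses the same two differentials; for a non-commutative algebra, d0 is no longer zero and the inner derivations genuinely get quotiented out in hh1.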
Previously, we saw that first cohomology can be understood in terms of derivations. We can formulate derivations in terms of this semi-direct product.

Lemma. We have

  Der_k(A, M) ≅ {algebra complements to M in A ⋉ M isomorphic to A}.

Proof. A complement to M is an embedded copy of A in A ⋉ M, given by

  a ↦ (a, Da).

The function A → M given by a ↦ Da is a derivation, since under the embedding, we have ab ↦ (ab, a Db + (Da) b). Conversely, a derivation f : A → M gives an embedding of A in A ⋉ M given by a ↦ (a, f(a)).

We can further rewrite this characterization in terms of automorphisms of the semi-direct product. This allows us to identify inner derivations as well.

Lemma. We have

  Der(A, M) ≅ {automorphisms of A ⋉ M of the form a ↦ a + f(a)ε, mε ↦ mε},

where we view A ⋉ M ≅ A + Mε. Moreover, the inner derivations correspond to automorphisms achieved by conjugation by 1 + mε, which is a unit with inverse 1 − mε.

The proof is a direct check. This applies in particular when we pick M = A (as an A-A-bimodule), and the Lie algebra of derivations of A may be thought of as the set of infinitesimal automorphisms.

Let’s now consider HH^2(A, M). This is to be understood in terms of extensions, of which the “trivial” example is the semi-direct product.

Definition (Extension). Let A be an algebra and M an A-A-bimodule. An extension of A by M is a k-algebra B containing a 2-sided ideal I such that

  – I^2 = 0;
  – B/I ≅ A; and
  – M ≅ I as an A-A-bimodule.

Note that since I^2 = 0, the left and right multiplication in B induces an A-A-bimodule structure on I, rather than just a B-B-bimodule structure. We let π : B → A
be the canonical quotient map. Then we have a short exact sequence

  0 → I → B → A → 0.

Two extensions B_1 and B_2 are isomorphic if there is a k-algebra isomorphism θ : B_1 → B_2 such that the following diagram commutes:

  0 → I → B_1 → A → 0
          ↓θ
  0 → I → B_2 → A → 0

Note that the semi-direct product is such an extension, called the split extension.

Proposition. There is a bijection between HH^2(A, M) and the isomorphism classes of extensions of A by M.

This is something that should be familiar if we know about group cohomology.

Proof. Let B be an extension with, as usual, π : B → A, I = M = ker π, I^2 = 0. We now try to produce a cocycle from this. Let ρ be any k-linear map A → B such that π(ρ(a)) = a. This is possible since π is surjective. Equivalently, ρ(π(b)) ≡ b (mod I). We define a k-linear map f_ρ : A ⊗ A → I ≅ M by

  a_1 ⊗ a_2 ↦ ρ(a_1)ρ(a_2) − ρ(a_1 a_2).

Note that the image lies in I since

  ρ(a_1)ρ(a_2) ≡ ρ(a_1 a_2)  (mod I).

It is a routine check that f_ρ is a 2-cocycle, i.e. it lies in ker δ^2.

If we replace ρ by any other ρ′, we get f_{ρ′}, and we have

  f_ρ(a_1 ⊗ a_2) − f_{ρ′}(a_1 ⊗ a_2)
    = ρ(a_1)(ρ(a_2) − ρ′(a_2)) − (ρ(a_1 a_2) − ρ′(a_1 a_2)) + (ρ(a_1) − ρ′(a_1))ρ′(a_2)
    = a_1 · (ρ(a_2) − ρ′(a_2)) − (ρ(a_1 a_2) − ρ′(a_1 a_2)) + (ρ(a_1) − ρ′(a_1)) · a_2,

where · denotes the A-A-bimodule action in I. Thus, we find

  f_ρ − f_{ρ′} = δ^1(ρ − ρ′),

noting that ρ − ρ′ actually maps to I. So we obtain a map from the isomorphism classes of extensions to the second cohomology group.

Conversely, given an A-A-bimodule M and a 2-cocycle f : A ⊗ A → M, we let

  B_f = A ⊕ M

as k-vector spaces. We define the multiplication map

  (a_1, m_1)(a_2, m_2) = (a_1 a_2, a_1 m_2 + m_1 a_2 + f(a_1 ⊗ a_2)).

This is associative precisely because of the 2-cocycle condition. The map (a, m) ↦ a yields a homomorphism π : B_f → A, with kernel I being a two-sided ideal of B_f which has I^2 = 0. Moreover, I ≅ M as an A-A-bimodule. Taking ρ : A → B_f by ρ(a) = (a, 0) yields the 2-cocycle we started with.

Finally, let f′ be another 2-cocycle cohomologous to f. Then there is a linear map τ : A → M with f′ − f = δ^1 τ. That is,

  f′(a_1 ⊗ a_2) = f(a_1 ⊗ a_2) + a_1 · τ(a_2) − τ(a_1 a_2) + τ(a_1) · a_2.

Then consider the map B_f → B_{f′} given by (a, m) ↦ (a, m + τ(a)). One then checks this is an isomorphism of extensions. And then we are done.

In the proof, we see 0 corresponds to the semi-direct product.

Corollary. If HH^2(A, M) = 0, then all extensions are split.

We now prove some theorems that appear to be trivialities. However, they are trivial only because we now have the machinery of Hochschild cohomology. When they were first proved, such machinery did not exist, and they were written in a form that seemed less trivial.

Theorem (Wedderburn, Malcev). Let B be a k-algebra satisfying

  – Dim(B/J(B)) ≤ 1;
  – J(B)^2 = 0.

Then there is a subalgebra A ≅ B/J(B) of B such that B = A ⋉ J(B). Furthermore, if Dim(B/J(B)) = 0, then any two such subalgebras A, A′ are conjugate, i.e. there is some x ∈ J(B) such that

  A′ = (1 + x)A(1 + x)^{−1}.

Notice that 1 + x is a unit in B.

In fact, the same result holds if we only require J(B) to be nilpotent. This follows from an induction argument using this as a base case, which is messy and not really interesting.

Proof. We have J(B)^2 = 0. Since we know Dim(B/J(B)) ≤ 1, we must have

  HH^2(A, J(B)) = 0,

where

  A ≅ B/J(B).

Note that we regard J(B) as an A-A-bimodule here. So we know that all extensions of A by J(B) are split, as required.

Furthermore, if Dim(B/J(B)) = 0, then we know HH^1(A, J(B)) = 0. So by our older lemmas, we see that complements are all conjugate, as required.

Corollary. If k is algebraically closed and dim_k B < ∞, then there is a subalgebra A of B such that

  A ≅ B/J(B), and B = A ⋉ J(B).

Moreover, A is unique up to conjugation by units of the form 1 + x with x ∈ J(B).

Proof. We need to show that Dim(B/J(B)) = 0. But we know B/J(B) is a semisimple k-algebra of finite dimension, and in particular is Artinian. So by Artin–Wedderburn, we know B/J(B) is a direct sum of matrix algebras over k (since k is algebraically closed and dim_k(B/J(B)) < ∞). We have previously observed that M_n(k) is k-separable. Since k-separability behaves well
under direct sums, we know B/J(B) is k-separable, hence has dimension zero. It is a general fact that J(B) is nilpotent.

3.3 Star products

We are now going to study some deformation theory. Suppose A is a k-algebra. We write V for the underlying vector space of A. Then there is a natural algebra structure on V ⊗_k k[[t]] = V[[t]], which we may write as A[[t]]. However, we might wish to consider more interesting algebra structures on this vector space. Of course, we don’t want to completely forget the algebra structure on A. So we make the following definition:

Definition (Star product). Let A be a k-algebra, and let V be the underlying vector space. A star product is an associative k[[t]]-bilinear product on V[[t]] that reduces to the multiplication on A when we set t = 0.

Can we produce non-trivial star products? It seems difficult, because when we write down an attempt, we need to make sure it is in fact associative, and that might take quite a bit of work. One example we have already seen is the following:

Example. Given a filtered k-algebra A, we formed the Rees algebra associated with the filtration, and it embeds as a vector space in (gr A)[[t]]. Thus we get a product on (gr A)[[t]]. There are two cases we are most interested in — when A = A_n(k) or A = U(g). We saw that gr A was actually a (commutative) polynomial algebra. However, the product on the Rees algebra is non-commutative. So the ∗-product will be non-commutative.

In general, the availability of star products is largely controlled by the Hochschild cohomology of A. To understand this, let’s see what we actually need to specify to get a star product. Since we required the product to be a k[[t]]-bilinear map

  f : V[[t]] × V[[t]] → V[[t]],

all we need to do is to specify
what elements of V = A are sent to. Let a, b ∈ V = A. We write

  f(a, b) = ab + t F_1(a, b) + t^2 F_2(a, b) + · · ·.

Because of bilinearity, we know the F_i are k-bilinear maps, and so correspond to k-linear maps V ⊗ V → V. For convenience, we will write F_0(a, b) = ab.

The only non-trivial requirement f has to satisfy is associativity:

  f(f(a, b), c) = f(a, f(b, c)).

What condition does this force on our F_i? By looking at coefficients of t, this implies that for all λ = 0, 1, 2, · · ·, we have

  Σ_{m+n=λ, m,n≥0} [F_m(F_n(a, b), c) − F_m(a, F_n(b, c))] = 0.   (∗)

For λ = 0, we are just getting the associativity of the original multiplication on A. When λ = 1, this says

  a F_1(b, c) − F_1(ab, c) + F_1(a, bc) − F_1(a, b) c = 0.

All this says is that F_1 is a 2-cocycle! This is not surprising. Indeed, we’ve just seen (a while ago) that working mod t^2, the extensions of A by A (as an A-A-bimodule) are governed by HH^2. Thus, we will refer to 2-cocycles as infinitesimal deformations in this context. Note that given an arbitrary 2-cocycle A ⊗ A → A, it may not be possible to produce a star product with the given 2-cocycle as F_1.

Definition (Integrable 2-cocycle). Let f : A ⊗ A → A be a 2-cocycle. Then it is integrable if it is the F_1 of a star product.

We would like to know when a 2-cocycle is integrable. Let’s rewrite (∗) as (†_λ):

  Σ_{m+n=λ, m,n>0} [F_m(F_n(a, b), c) − F_m(a, F_n(b, c))] = (δ^2 F_λ)(a, b, c).   (†_λ)

Here we are identifying F_λ with the corresponding k-linear map A ⊗ A → A. For λ = 2, this says

  F_1(F_1(a, b), c) − F_1(a, F_1(b, c)) = (δ^2 F_2)(a, b, c).

If F_1 is a 2-cocycle, then one can check that the LHS gives a 3-cocycle. If F_1 is integrable, then the LHS has to be equal to the RHS, and so must be a coboundary, and thus has cohomology class zero in HH^3(A, A).

In fact, if F_1, · · ·, F_{λ−1} satisfy (†_1), · · ·, (†_{λ−1}), then the LHS of (†_λ) is also a 3-cocycle. If it is a coboundary, and we have defined F_1, · · ·, F_{λ−1}, then we can define F_λ such that (†_λ) holds. However, if it is not a coboundary, then we get stuck, and we see that our choice of F_1, · · ·, F_{λ−1} does not lead to a ∗-product. The 3-cocycle appearing on the LHS of (†_λ) is an obstruction to integrability. If, however, these cocycles are always coboundaries, then we can inductively define F_1, F_2, · · · to give a ∗-product. Thus we have proved

Theorem (Gerstenhaber). If HH^3(A, A) = 0, then all infinitesimal deformations are integrable.

Of course, even if HH^3(A, A) ≠ 0, we can still get ∗-products, but we need to pick our F_1 more carefully.

Now after producing star products, we want to know if they are equivalent.

Definition (Equivalence of star products). Two star products f and g on V ⊗ k[[t]] are equivalent if there is a k[[t]]-linear automorphism Φ of V[[t]] of the form

  Φ(a) = a + t φ_1(a) + t^2 φ_2(a) + · · ·

such that

  f(a, b) = Φ^{−1} g(Φ(a), Φ(b)).

Equivalently, the following diagram has to commute:

  V[[t]] ⊗ V[[t]] --Φ⊗Φ--> V[[t]] ⊗ V[[t]]
       |f                       |g
       v                        v
     V[[t]]      --Φ-->       V[[t]]

Star products equivalent to the usual product on A ⊗ k[[t]] are called trivial.

Theorem (Gerstenhaber). Any non-trivial star product f is equivalent to one of the form

  g(a, b) = ab + t^n G_n(a, b) + t^{n+1} G_{n+1}(a, b) + · · ·,

where G_n is a 2-cocycle and not a coboundary. In particular, if HH^2(A, A) = 0, then any star product is trivial.

Proof. Suppose as usual f(a, b) = ab + t F_1(a, b) + t^2 F_2(a, b) + · · ·, and suppose F_1, · · ·, F_{n−1} = 0. Then it follows from (†) that

  δ^2 F_n = 0.

If F_n is a coboundary, then we can write F_n = −δφ_n for some φ_n : A → A. We set

  Φ_n(a) = a + t^n φ_n(a).

Then we can compute that

  Φ_n^{−1}(f(Φ_n(a), Φ_n(b)))

is of the form ab + t^{n+1} G_{n+1}(a, b) + · · ·. So we have managed to get rid of a further term, and we can keep going until we get the first non-zero term not a coboundary. Suppose this never stops. Then f is trivial — we are using that · · · ◦ Φ_{n+2} ◦ Φ_{n+1} ◦ Φ_n converges in the automorphism ring, since we are adding terms of higher and higher