MA4JB Commutative Algebra II
Second half of the course, Weeks 6-10.
Immediate aims: Basic ideas of homological algebra: projective modules
and projective resolutions, the easy form of the Hilbert syzygies
theorem. Regular sequences, depth and the Koszul complex. The Ext
functors. Injective modules and injective resolutions. There are two
definitions of Ext: as a contravariant functor in the first argument,
via projective resolutions, and in a complementary direction as a covariant
functor in the second argument via injective resolutions.
Longer-term aims (we may not get through all this):
Serre's characterisation of normal by R1+S2
The elementary theory of regular local rings is in [A\&M, Chap 11]. More
general Hilbert syzygies theorem over regular local rings, and the
Auslander-Buchsbaum refinement.
Characterisation of regular local rings by finite projective dimension.
Macaulay's unmixedness theorem.
Cohen-Macaulay and Gorenstein rings, their definition, characterisation
and main properties. Towards duality.
===== Overview of homological algebra =====
Everything in the rest of the course involves homological algebra in
some form or another: Complexes, exact sequences, what to do when the
operations we want to do break exactness.
== Colloquial overview of Abelian categories ==
Let M,N be modules over a ring, and f: M -> N a homomorphism. We know
sub-object, quotient object, kernel, image, cokernel, sometimes coimage:
ker f in M, with quotient M/(ker f) the "coimage";
im f in N, with quotient N/(im f) the cokernel.
the map f: M -> N can be broken down into
0 -> ker f -> M -> M/(ker f) -> 0,
an isomorphism M/(ker f) iso im f,
0 -> im f -> N -> coker f -> 0. (*)
(This is mostly familiar, except that you may not have seen
"coimage" = M/(ker f) before -- an artificial construction.)
This is a bit like the rank-nullity result of first year linear algebra:
write down some linear equations. How many solutions we get depends on
whether the equations are linearly independent, and so on.
There is a level of abstraction even before that: the set Hom(M,N)
between objects is an Abelian group under addition (or an R-module), and
the direct sum M + N is a module with
maps in i1: M -> M+N given by m |-> (m,0) and similarly i2 for N
maps out p1: M+N -> M given by (m,n) |-> m and similarly p2 for N
together with identifications p1.i1 = id M, p2.i1 = 0, and a few more.
Whenever we use direct sum with these properties, we are in an _additive
category_: the Hom sets are Abelian groups (or R-modules), categorical
product and coproducts M+N as above are defined and coincide (or the
same with two objects M,N replaced by finitely many objects).
An _Abelian category_ is an additive category with ker and im, coimage
and coker having the properties (*). Whenever you say that a complex of
modules is an exact sequence, you are working in an Abelian category,
whether you know what that is or not. "An Abelian category is a category
satisfying just enough axioms so that the snake lemma holds."
For our purposes, there is no need to pay special attention to these
issues, because we only work with modules over a ring. The categorical
stuff consists of tautologies that we use all the time. (Under
appropriate set-theoretic assumptions) it is a theorem that every
Abelian category is equivalent to a category of modules over a ring.
[I currently work only with modules, and not in abstract category
theory. There are more general abstract categories, where morphisms are
not viewed as maps of sets, and all the definitions, starting from 0 and
what it means for a morphism to be the inclusion of a suboject, or to be
an epimorphism, need rethinking from the ground up.]
===== Entry point to homological algebra: the Hom functor =====
The Hom functor Hom_A(-,N) is a contravariant functor in its first
entry. It is _left exact_. Its failure to be right exact corresponds to
_extensions_, that are controlled by a new functor Ext^1.
Category of modules over a ring R. The functor Hom_R(-,N) takes
an object M |-> the R-module Hom_R(M,N)
(consisting of R-homomorphisms M -> N), and takes
a homomorphism M1 -al-> M2 |-> the R-homomorphism (alpha)
al^*: Hom(M2,N) -> Hom(M1,N),
that consists of composing with al. That is, compose f: M2 -> N with the
given al, to get f.al: M1 -> M2 -> N. Functor means compatibility with
compositions: al^*.be^* = (be.al)^*, and with identity morphisms:
id^* = id.
Hom(-,N) is a minor generalisation of the dual of a vector space over a
field k, where Hom(-,k) takes V to its dual V^dual and a k-linear map
U -M-> V to its adjoint or transpose Mt: V^dual -> U^dual.
Lemma Hom(-,N) is left-exact. That is, if we apply Hom(-,N) to a s.e.s.
0 -> A -al-> B -be-> C -> 0
we get an exact sequence
0 -> Hom(C,N) -be^*-> Hom(B,N) -al^*-> Hom(A,N). (2)
Solemn proof. For f in Hom(C,N), if the composite f.be is zero then f is
zero: take c in C, lift it to b in B with be(b) = c, then
f(c) = f(be(b)) = (f.be)(b) = 0. The argument is trivial, given that
B ->> C is surjective.
Next, exactness at the middle: the composite al^*.be^* = (be.al)^* = 0
so the sequence (2) is a complex. To say that g: B -> N is in the kernel
of al^* means that g(b) is well-defined on the coset of b modulo the
image of al. This means that if we lift c in C to an element b in B,
then apply g to b, we get g(b) in N that does not depend on the choice
of lift. This gives a well-defined morphism gbar: C -> N,
c |-> (choice of lift b) |-> gbar(c) := g(b),
with be^*(gbar) = g, which proves exactness at the middle.
This was all long-winded and trivial. The key point however: there is no
reason why (2) must be exact at the right end: why should an R-module
homomorphism A -> N extend to B -> N? This fails in familiar cases:
(1) Consider
0 -> A -al-> B -> C -> 0 with A = ZZ, B = ZZ and C = ZZ/p
where the first map is multiplication by p. Set N = ZZ/p and consider
the functor Hom(-, N). There is a perfectly nice map f: A -> N that
sends a |-> a mod p. This cannot be of the form al^*(g) = g.al for any
g: B -> N, since g.al takes a to g(p*a) = p*g(a), and multiplication by
p kills every element of N.
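The obstruction in example (1) can be checked mechanically. A minimal
Python sketch, taking p = 5 for concreteness (the names pullback and f
are mine): every g: B = ZZ -> ZZ/5 pulls back along multiplication by 5
to the zero map, so the nonzero f is not in the image of al^*.

```python
p = 5                            # any prime works

def pullback(c):
    """al^*(g) for g: ZZ -> ZZ/p with g(1) = c, i.e. a |-> g(p*a)."""
    return lambda a: (p * a * c) % p

# al^*(g) is the zero map, whatever g ...
assert all(pullback(c)(a) == 0 for c in range(p) for a in range(-10, 10))
# ... but f: a |-> a mod p is a nonzero element of Hom(A, N)
f = lambda a: a % p
assert f(1) != 0
```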
(2) In a similar vein, let R = k[x, y]_m be the localisation of k[x,y]
at the maximal ideal m = (x,y), and work in the category of R-modules.
Consider 0 -> A -al-> B -> C -> 0 with al the inclusion m in R and
C = R/m the residue field. Consider the functor Hom(-,N) where N = k.
Now a homomorphism g: B -> N = k necessarily vanishes on the submodule
A in B, because g(x.1) = x*g(1) = 0 in N and ditto for y.
On the other hand, there are plenty of nice nonzero homomorphisms
m -> k (the dual vector space (m/m^2)^dual). None of these can be the
restriction of any g, so that al^* is certainly not surjective.
(3) A wider view of these examples: let I in R be an ideal and f: I -> N
a nonzero homomorphism to an I-torsion module, for example M/IM for an
R-module M. A homomorphism R -> N necessarily vanishes on I, so that it
is certainly not possible to extend the given f: I -> N to a
homomorphism F: R -> N.
===== Failure of exactness gives Ext^1 =====
Consider again a s.e.s. of R-modules
0 -> A -al-> B -be-> C -> 0.
We get the exact sequence
0 -> Hom(C,N) -> Hom(B,N) -> Hom(A,N)
Given f: A -> N, construct the _pushout_ diagram
0 -> A -> B -> C -> 0
| | |
v v v
0 -> N -> B' -> C -> 0
where B' = (B + N)/im(al, f). If the bottom row is a split s.e.s. of
R-modules (this means B' = N + C, with arrows the inclusion and
projection of the direct sum), we know how to extend f to B by including
B in B' then projecting the direct sum to its first factor.
Exercise: Please think about how to prove the converse.
In the same set-up, one can show that the class of the bottom row
0 -> N -> B' -> C -> 0
up to isomorphism of s.e.s. is determined by f in Hom(A,N) modulo the
image al^*(Hom(B,N)). I do not press this point, except to say that
this explains the notation Ext^1(C,N): we can identify the cokernel of
al^* with extensions of C by N.
Summary of narrative so far: Categories, exact sequences. If applying a
reasonable functor breaks exactness, we introduce derived functors such
as Ext^1(-,N) to understand the lack of exactness and get some profit
from it.
Given a s.e.s.
0 -> A -> B -> C -> 0
and a module N, Homming into N gives
0 -> Hom(C,N) -> Hom(B,N) -> Hom(A,N) ->
-> Ext^1(C,N)
In other words, there is a new module Ext^1(C,N) that measures the
failure of right exactness. It is a kind of _derived_ Hom. This gives
the flavour of what a right derived functor is and does.
===== Projective modules =====
Definition. Let P be an R-module. P is _projective_ if for every
surjective homomorphism f: M ->> N and every homomorphism g: P -> N,
there exists a lift G: P -> M, such that f.G = g. As a diagram:
M -f-> N -> 0
^ ^
G \ | g
P given f and g, there exist G
making the triangle commute.
As well as the contravariant form discussed above, Hom_R(M,-) is a
covariant functor in its second argument, and is automatically left
exact whatever M (please do this as an easy exercise). The condition
that P is projective is equivalent to Hom(P,-) an exact functor:
if 0 -> A -> B -> C -> 0 is a s.e.s.
then Hom(P,B) ->> Hom(P,C). This just says that a homomorphism P -> C
can be lifted through B ->> C, which is just the projective assumption.
Example-Prop. (1) If P is free then it is projective.
(2) P is projective if and only if P is a direct summand of a free
module.
(3) Over a local ring (R,m), a finite projective module P is free.
(4) Over a graded ring (graded in positive degrees), a finite graded
module that is projective as a graded module is free.
(5) A finite projective module is locally free, that is, its
localisation P_p at each prime ideal of R is free.
(1) In fact, if P has a basis e_la, take n_la = g(e_la) in N, then lift
each n_la to m_la in M with f(m_la) = n_la. We can then define G by
setting G(e_la) = m_la. This determines where G takes the basis
elements, and R-linearity gives the rest: an element sum a_la.e_la in P
maps to sum a_la.m_la. (This works because there are no R-linear
relations between the e_la, so we can map them to any elements of M we
choose. The argument is exactly the same as for vector spaces.)
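The lifting argument in (1) can be sketched in a toy case. A minimal
Python check, assuming M = ZZ, N = ZZ/6, the surjection f = reduction
mod 6, and P = ZZ^2 free on e1, e2 with the arbitrary choice g(e1) = 2,
g(e2) = 5 (all names here are mine):

```python
def f(m):                       # the surjection f: M = ZZ ->> N = ZZ/6
    return m % 6

g_on_basis = {1: 2, 2: 5}       # g: P -> N on the basis e1, e2
G_on_basis = {1: 2, 2: 5}       # a choice of lifts m_la with f(m_la) = n_la

def g(a1, a2):                  # extend g by linearity; values in ZZ/6
    return (a1 * g_on_basis[1] + a2 * g_on_basis[2]) % 6

def G(a1, a2):                  # the lift G: P -> M, extended by linearity
    return a1 * G_on_basis[1] + a2 * G_on_basis[2]

# f.G = g on P, checked on a grid of coefficients
assert all(f(G(a1, a2)) == g(a1, a2)
           for a1 in range(-5, 6) for a2 in range(-5, 6))
```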
(2) If P+Q (direct sum) is free, a map g: P -> N gives rise to
(g,0): P+Q -> N, that we can lift to M by (1), so P is projective.
For the converse, suppose that P is generated by elements {p_la}. This
means that the map f: M = sum R.e_la -> P from the free module M taking
e_la |-> p_la is surjective. Now suppose P is projective, and consider
the identity map id: P -> P. Applying the definition of projective to it
gives G: P -> M with f.G = id_P.
But now f.G: P -> P is the identity, whereas G.f: M -> M is idempotent
(because G.f.G.f = G.f when we cancel the middle f.G).
Thus M = im(G.f) + ker(G.f) is a direct sum decomposition of the free
module M as P+Q with P iso G(P) = im(G.f) and Q = ker(G.f) = ker f. QED
(3) A minimal (finite) set of generators of P gives a surjective
homomorphism f: F = R^{oplus n} ->> P. The projective assumption gives a
splitting g: P -> F with f.g = id_P, so that F = g(P) + K, with
K = ker f. However, by minimality a relation between the generators
cannot have any invertible coefficients, so K in m*F. Projecting
m*F = m*g(P) + m*K to the summand K gives K = m*K, so K = 0 by
Nakayama's lemma.
(4) and (5) are minor variations on the same proof.
Counterexample (projective but not free): If OK is the ring of integers
of a number field K/Q, and I a prime ideal, then by definition I is a
free OK-module if and only if it is principal. This usually fails.
However, the theory of the class group implies that I is always a
projective OK module: there is an ideal J that is the negative of I
in the class group, so that I*J = (n) is a principal ideal. One sees
that the module I+J (direct sum) is free, isomorphic to OK^{oplus 2}.
[Details remain
to check, I am not really an expert.]
In practice, we are mostly interested in local rings or graded rings,
so we almost always work with free modules.
===== Syzygies =====
Part of this is covered by the online Lecture 17, Part I.
Taking generators for a module M means choosing a finite free module
P0 = sum R*e_i and a surjective homomorphism d0: P0 ->> M. To calculate
with M we need to know about the relations that hold between the
generators, that is, the kernel K0 = ker d0. To know that we have all the
relations means to ask for generators of the module K0 of all relations,
that is, choose a finite free module P1 and a map d1: P1 -> P0 with
image K0. This gives a _presentation_
P1 -d1-> P0 -d0-> M -> 0
a finite set of generators of M, and a finite set of all relations
between them with free P0 = R^b0 and P1 = R^b1. The requirements set out
for a presentation are exactly that this is an exact complex.
We can go further, and ask for the kernel of d1. That is, write down
syzygies or "relations between the relations" as a finite set of
generators of ker d1. This means that we extend the above exact complex
with P2 -d2-> P1 -d1-> .. where P2 = R^b2 and d2: P2 ->> ker d1.
Let's see why this might be useful. First of all, we are perfectly used
to the idea of a principal ideal Rx in R in an integral domain. The
group ZZ/n has presentation
0 -> ZZ -n-> ZZ -> ZZ/n -> 0, in parallel with
P1 -d1-> P0 -d0-> M -> 0
but with the advantage that the principal ideal (n) is a free module. A
striking point: a nonzero principal ideal in an integral domain is the
ONLY case in which an ideal is a free R-module!
For example, take an ideal with 2 generators, such as I = (F,G) in R the
equations of two transverse hypersurfaces. The conditions F = 0 and
G = 0 may be algebraically independent, but F and G are linearly
dependent as elements of the ambient ring R: the reason is the
trivial-looking point that G*F - F*G = 0. (Here the second factors F and
G are the generators of I, and the first factors G and F are the
coefficients in an R-linear dependence.) Stupid, but true!
Write P0 = R^2 for the free module of rank 2, and d0: P0 -> R the map
taking the two generators to F,G. Then the kernel of d0 certainly
includes the R-submodule of P0 generated by (-G,F). We discuss below
the natural condition under which d1: R -> P0 sending 1 to (-G,F)
has image the whole of ker d0. Then
0 -> R -> R^2 -> I -> 0 is a s.e.s.,
where the first map d1 takes 1 |-> (-G,F) and the second map d0 takes
the two basis elements to F and G. This complex is the first instance
of the Koszul complex that we discuss in more detail below.
It is also natural to write the complex left-to-right,
0 <- I <-(F,G)- R^2 <-(-G \\ F) <- R <- 0,
the two homomorphisms are the 1x2 row matrix (F,G) and the 2x1 column
matrix (-G \\ F), and the composite is just matrix multiplication.
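The matrix form above can be checked with sympy. A small sketch, taking
the illustrative choice F = x^2 + y^2 - 1 and G = x*y (any pair works
for the composition being zero):

```python
from sympy import symbols, Matrix, simplify

x, y = symbols('x y')
F = x**2 + y**2 - 1
G = x*y

d1 = Matrix([[F, G]])        # 1x2 row matrix: R^2 -> R, basis |-> F, G
d2 = Matrix([[-G], [F]])     # 2x1 column matrix: R -> R^2, 1 |-> (-G, F)

# the composite d1.d2 is the trivial syzygy F*(-G) + G*F = 0
assert simplify((d1 * d2)[0, 0]) == 0
```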
Another simple case: the 3 coordinate lines in affine space AA^3 are
defined by the ideal I = (y*z, x*z, x*y). Its presentation can be
written
0 <- I <-d1- R^3 <-d2- R^2 <- 0
where the map d1 is given by the 1x3 matrix M1 = (y*z, x*z, x*y),
and its kernel is the image of d2, given by the 3x2 matrix of syzygies
M2 =
[ x 0 ]
[ -y -y ]
[ 0 z ]
Check that M1*M2 = 0, and check also that the 3 relations are given by
2x2 minors of the matrix M2. This reflects a somewhat curious feature of
the set-up: the matrix of first syzygies (that corresponds to the
homomorphism d2 of "relations between the relations") contains
information about the structure of I that is additional to the
information contained in the generators themselves. (In the same way
that a whole matrix contains more information than just its
determinant.)
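Both checks suggested above can be done with sympy (a direct
verification, nothing hypothetical here beyond the variable names):

```python
from sympy import symbols, Matrix, expand

x, y, z = symbols('x y z')
M1 = Matrix([[y*z, x*z, x*y]])          # 1x3: the generators of I
M2 = Matrix([[ x,  0],
             [-y, -y],
             [ 0,  z]])                 # 3x2: the matrix of syzygies

# the composite vanishes: the columns of M2 are relations
assert (M1 * M2).applyfunc(expand) == Matrix([[0, 0]])

# the 2x2 minors of M2 recover the generators up to sign
minors = [M2.extract(rows, [0, 1]).det() for rows in ([0, 1], [0, 2], [1, 2])]
gens = {expand(m) for m in minors} | {expand(-m) for m in minors}
assert {y*z, x*z, x*y} <= gens
```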
== Aim of the next few lectures ==
This week's lectures mentioned in a simple-minded way many of the key
ideas of the rest of the course.
I discussed above the presentation of a module M by free modules
P2 -d2-> P1 -d1-> P0 -d0-> M
consisting of generators, relations and syzygies. The Hilbert Syzygies
theorem gives the important case when this presentation extends to a
finite free resolution
0 -> Pm -dm-> .. -d3-> P2 -d2-> P1 -d1-> P0 -d0-> M
(for a graded module M over a polynomial ring k[x1,..xn]). This is
a complex with each Pi a finite free module, that is an exact sequence
at each Pi, and that comes to an end after m <= n steps.
Regular sequences encode the idea of passing from M to the quotient
M/s1*M, where s1 in R is a nonzerodivisor for M, so that the submodule
s1*M is isomorphic to M itself. We saw this kind of idea in the
treatment of the system of parameters and the dimension de(M) of a
module in earlier lectures, but here we want the s.o.p. to be such that
at each stage, s_{i+1} is a nonzerodivisor for M/(s1,..,s_i)M, so that
each time we cut down the dimension, we are doing it in a clean way.
Regular sequences provide the condition for the above Koszul complex
0 <- I <-(F,G)- R^2 <-(-G \\ F) <- R <- 0,
to be exact (and the same for more generators (s1,..,sn)). This leads to
the important notion of a Cohen-Macaulay local ring (or module over
it): the case when some s.o.p. (s1,..,sn) with n = de(M), cutting M
down to a 0-dimensional (Artinian) quotient, is also a regular
sequence.
== Regular sequences ==
R a ring and M a module. An element s in R is _regular_ for M if it is
a nonzerodivisor -- that is, multiplication by s defines an injective
map M -s-> M. An important point (that we have used several times
already) is that the image s*M is a submodule of M isomorphic to it.
Next, a sequence (s1,.. sn) of elements of R is a _regular sequence_
for M if s1 is regular, and successively s_{i+1} is regular
for M/(s1*M + .. si*M).
We will see presently that a sufficient condition for the Koszul complex
0 <- I <-(F,G)- R^2 <-(-G \\ F) <- R <- 0
of (F,G) to be exact is that (F,G) is a regular sequence. (This is also
a necessary condition in the local Noetherian context.)
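When the pair is not regular, the Koszul complex fails to be exact. A
small sympy sketch, with the illustrative non-regular pair F = x,
G = x*y (so G becomes a zerodivisor, indeed zero, mod F): there is a
syzygy not accounted for by (-G, F).

```python
from sympy import symbols, expand, div

x, y = symbols('x y')
F, G = x, x*y                  # not a regular sequence: x*y = 0 mod x

# (-y, 1) really is a relation between F and G ...
assert expand((-y)*F + 1*G) == 0
# ... but it is no R-multiple of the Koszul syzygy (-G, F) = (-x*y, x):
# matching second entries would need h with h*x = 1, impossible for
# polynomials, as polynomial division confirms
quotient, remainder = div(1, x, x)
assert remainder != 0
```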
I want to start with the first step in the proof of the Hilbert syzygies
theorem (the historical form) using a single regular element. The
definition of regular sequence is inductive, and this result plays a
role as part of the inductive proof of Hilbert syzygies theorem.
Slogan "Regular section principle", also "Hyperplane section principle":
Assume R,M are either graded or local. Let s in R be an M-regular
element, and write Mbar = M/s*M. Then we can lift generators,
relations, syzygies, or even whole free resolutions from Mbar to M.
== Regular section principle ==
or "Hyperplane section principle"
Part of this is covered by the online Lecture 17, Part II.
Lemma I. S ring, x in S and M a finite S-module. Write
M/x*M = Mbar = N, and p: M -> N for the quotient map.
Assume either that S is graded and x is homogeneous of deg x > 0,
or that S is local with maximal ideal m and x in m.
Let u1,.. u_n in M be elements such that their images vi = p(ui)
generate N. Then u1,.. un generate M.
Proof. Take u in M and set v = p(u) for its image in N. Then the
assumption that vi generate N gives v = sum ai*vi for some ai in S,
so that u - sum ai*ui is in ker p = x*M.
In the local case x in m so this implies M = sum S*(u1,.. u_n) + m*M,
and Nakayama's lemma implies that u1,.. u_n generate M.
In the graded case, the difference u - sum ai*ui is a multiple of x,
say x*u', with deg u' = deg u - deg x < deg u. So by induction on the
degree we can assume that u' is in the module generated by u1,.. u_n,
so that this also holds for u = sum ai*ui + x*u'. Q.E.D.
Notice that Lemma I has weak assumptions. In what follows we add the
assumption that x is M-regular (so that the submodule x*M in M is
isomorphic to M itself) and that S acts faithfully on M (that is, no
element of S acts by 0).
Lemma II. As in Lemma I, let S be a ring, x in S, and M a finite
S-module. As before either S is graded and x is homogeneous of
deg x > 0, or S,m is local with x in m. Now assume that S acts
faithfully on M and that x is M-regular. Keep the above notation for the
quotient map p: M -> N = M/x*M and the generators ui in M with p(ui) =
vi in N. Now we treat N as a module over the quotient ring S/x = Sbar
and write p: S -> Sbar for the quotient ring homomorphism.
Then an Sbar-linear relation r between the vi in N can be lifted to an
S-linear relation s between the ui. And if relations rj generate the
submodule of relations in N between the vi, their lifts sj generate the
relations in M between the ui.
In more detail: Lemma I gives generators ui for M over S, and generators
vi for N over Sbar. Write the corresponding generating sequences as
free module P0 = S^{oplus n} ->> M (with ith basis element |-> ui)
free module Q0 = Sbar^{oplus n} ->> N (with ith basis element |-> vi)
and write K0 = ker(P0 -> M) and L0 = ker(Q0 -> N). Here K0 in P0 is the
submodule of all S-linear relations holding in M, and L0 in Q0 the same
for N.
The conclusion is that if L0 is generated by relations {r1,..rm} then
for each j, there exists a relation sj = sum a_ji*ui holding between the
ui in M such that rj = sum p(a_ji)*vi (in other words K0 ->> L0), and the
sj generate K0.
The proof reduces to a diagram chase on
0 -> K0 -> P0 -> M -> 0
x| x| x|
v v v
0 -> K0 -> P0 -> M -> 0
p| p| p|
v v v
0 -> L0 -> Q0 -> N -> 0
The three rows are generator-and-relation exact sequences. The top
vertical arrows are multiplication by the regular element x, so all
three are injective. The bottom three arrows are projections
p: M -> M/xM, etc.
The third column M -x-> M -> N is the definition of N = M/xM. The middle
column consists of free modules, a direct sum of copies of
0 -> S -x-> S -> Sbar -> 0, with the component of P0 corresponding to ui
mapping to the component of Q0 corresponding to the generator vi.
The first column consists of the kernels of P0 -> M and Q0 -> N.
Now the top three down arrows are injective by assumption, so the snake
lemma implies that the first column is an exact sequence: that is,
K0 -> L0 is surjective (the assertion that every relation rj between the
vi in N lifts to a relation sj between the ui in M). And any element of
ker p: K0 -> L0 is in the image of x. Now use the same argument as in
Lemma I: if s is some relation in K0, then p(s) is a linear combination
of the rj, say p(s) = sum bbar_j*rj. (Here bbar_j in Sbar is the image of
bj in S.) So s-sum bj*sj is in the kernel of p, hence in the image of x.
And we conclude again by Nakayama's lemma in the local case or by
induction on degree in the graded case.
===== APPENDIX: Injective modules =====
(This was the subject of online Lecture 18, Parts I and II, see my
MA4J8 webpage.)
The course mainly works in terms of free or projective resolutions, and
uses these for practical calculations. On the other hand, many theoretical
questions require injective resolutions, and I try to give the flavour.
Definition An R-module J is injective if:
for every inclusion A into B of R-modules,
every homomorphism f: A -> J is obtained as
the restriction of a homomorphism F: B -> J.
Draw this as a commutative diagram:
0 -> A -> B
f| /F
v v
J
That is, given the inclusion A -al-> B and the homomorphism f,
there exists a homomorphism F with F.al = f, completing
a commutative triangle.
The definition says Hom_R(B,J) -> Hom_R(A,J) is surjective,
so the left exact functor Hom(-, J) discussed above is actually exact.
It follows from the definition that an exact sequence
0 -> J -> B -> C -> 0 with J an injective module splits as B = J + C.
Injective modules and injective resolutions appear everywhere in
homological algebra, but they are not at all intuitive, and do not
usually appear in explicit practical calculations. We need to know
that they _exist_, but mostly don't expect to get any benefit from
their structure. I give here a treatment (following Weibel).
First, an indication of the potential usefulness of injective modules.
(This is an introductory narrative, not a formal treatment.) In the
above, I discussed Hom(-,N) as a contravariant functor in the first
argument.
Proposition. Let 0 -> A -> B -> C -> 0 be an exact sequence of
R-modules and suppose that N is an R-module that has an inclusion
N into I, where I is an injective R-module.
The exact sequence discussed above continues as an exact sequence
0 -> Hom(C,N) -> Hom(B,N) -> Hom(A,N) (snake)
-> Hom(C, I/N) -> Hom(B, I/N) -> Hom(A, I/N)
This gives a construction of Ext^1(C,N) = Hom(C, I/N).
Explanation and sketch proof. Along with the previous pushout B',
consider the bigger pushout B" fitting in the diagram
0 -> A -> B -> C -> 0
| | ||
v v v
0 -> N -> B' -> C -> 0
| | ||
v v v
0 -> I -> B" -> C -> 0
The three rows of the diagram are exact sequences. The first two columns
are successive maps, not complexes. The third column is just 3 copies of
C with equalities, that arise as cokernels in the pushout maps.
Suppose I try to split the sequence of the first row by lifting the
surjective map be: B ->> C. I can do that set theoretically: choose a
lift c' in B of every c in C, with be(c') = c. But then asking whether
the lift is R-linear gives scope for lots of errors
(r1*c1 + r2*c2)' - r1*c1' - r2*c2' in A (*)
Changing the choices of c' gives lots of equivalences mod A, but if the
sequence is not split, we can never get rid of all the differences.
Since I is injective, the exact sequence on the bottom row is split.
Choose a splitting B" = I + C, with arrows the natural inclusion and
second projection of the direct sum.
Proposition An arbitrary choice of lifting { c' for c in C } gives rise
to a well-defined homomorphism of R-modules C -> I/N, which is the
coboundary map in (snake).
Choose a set-theoretic lifting { c' in B }. The map c |-> c' is not
R-linear because of the problem (*). Mapping them to B' does not
solve the problem, because the resulting quantities in (*) still
differ by elements of N. However, if we map them forward to B", and use
the first projection of B" = I + C to map them to I, the differences
belong to N in I, so give a well-defined homomorphism C -> I/N.
===== Appendix on injective modules =====
Some details on their properties and existence
Summary: For ZZ-modules, the inverse p-torsion modules (ZZ[1/p])/ZZ are
injective, and every ZZ-module M embeds into a product of these (usually
infinite). View any ring R as a ZZ-algebra; then for an injective
ZZ-module I, the ZZ-module Hom_ZZ(R, I) becomes an R-module under
_premultiplication_, and is an injective R-module. For an R-module M,
view M as a ZZ-module and embed it into an injective ZZ-module I. An
inclusion of M into an injective R-module is then provided by the
tautological identity
Hom_ZZ(M, I) = Hom_R(M, Hom_ZZ(R, I)).
This is mostly cribbed from Charles A. Weibel, An introduction to
homological algebra, CUP 1994.
====
Definition. An R-module I is injective if the functor Hom_R(-, I) is
exact. This means that if
0 -> M1 -> M2 is exact (that is, M1 embeds in M2)
then any homomorphism e: M1 -> I extends to f: M2 -> I (this means that
f restricted to M1 equals e).
Example. A k-vector space V is an injective k-module: if U in W are
vector spaces, a k-linear map U -> V extends to W. You know this from
Year 1 linear algebra if the vector spaces are finite dimensional (but
it requires Zorn's lemma otherwise).
====
Proposition. An R-module J is injective provided that for every ideal
I in R, every homomorphism e: I -> J extends to a homomorphism
f: R -> J. (This needs Zorn's lemma.)
Proof. Let M in N be an inclusion of R-modules and e: M -> J. Suppose
that e has already been extended to e': M' -> J for an intermediate
submodule M in M' in N (start with M' = M). The inductive step is to
take b in N \ M' and extend e' to the bigger submodule M' + R.b using
the assumption on extending ideals. In fact, define the ideal
I = { r in R s.t. r.b in M'}.
The homomorphism I -> J given by
r |-> br |-> e'(br)
is defined on I, so extends to R as a homomorphism f: R -> J. Then
e": (M' + bR) -> J is defined as e' on M' and b |-> f(1). In more
detail, e": (m + b*r) |-> e'(m) + f(r).
[To check well-defined: If some different m1, r1 has m+br = m1+br1 then
b(r-r1) = m-m1 in M', so r-r1 in I,
where the extension f started, so f(r-r1) = e'(m-m1).]
Now an application of Zorn's lemma proves that e extends to the
whole of N. QED
Corollary. An Abelian group (a ZZ-module) is injective iff it is
divisible. Hence the modules QQ and (ZZ[1/p])/ZZ are injective
ZZ-modules, and every injective ZZ-module is a direct sum of copies of
these. I refer to (ZZ[1/p])/ZZ as the inverse p-torsion module, by
analogy with Macaulay's inverse monomials.
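The divisibility criterion is easy to sanity-check on representatives.
A small Python sketch (the helper divides_in_Q_mod_Z is my name),
identifying classes of QQ/ZZ with fractions:

```python
from fractions import Fraction

def divides_in_Q_mod_Z(q, n):
    """Return b with n*b = q in QQ/ZZ (here even n*b = q in QQ)."""
    b = q / n
    assert n * b == q
    return b

# every class in QQ/ZZ is divisible by every n >= 1 ...
for num in range(1, 10):
    for den in range(1, 10):
        for n in range(1, 8):
            divides_in_Q_mod_Z(Fraction(num, den), n)

# ... while ZZ is not divisible: no integer b has 2*b = 1
assert not any(2 * b == 1 for b in range(-100, 101))
```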
Exercise. Check that QQ/ZZ is isomorphic to the direct sum of
(ZZ[1/p])/ZZ taken over all primes p. An analogous construction works
for a PID.
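The exercise can be illustrated computationally: splitting a class of
QQ/ZZ into its p-primary parts is just partial fractions, done here via
modular inverses (the helper primary_parts and the test value 5/12 are
my choices):

```python
from fractions import Fraction

def primary_parts(q):
    """Split the class of q in QQ/ZZ into its p-primary components:
    fractions with prime-power denominators summing to q modulo ZZ."""
    num, den = q.numerator % q.denominator, q.denominator
    # factor den into prime powers p^k
    prime_powers = {}
    d, p = den, 2
    while d > 1:
        if d % p == 0:
            pk = 1
            while d % p == 0:
                pk, d = pk * p, d // p
            prime_powers[p] = pk
        p += 1
    # Bezout/CRT: num/den = sum over p of (num * inv_p mod p^k) / p^k
    # (mod ZZ), where inv_p inverts den/p^k modulo p^k
    return {p: Fraction((num * pow(den // pk, -1, pk)) % pk, pk)
            for p, pk in prime_powers.items()}

q = Fraction(5, 12)
parts = primary_parts(q)
# the parts have denominators 4 and 3, and sum to q modulo ZZ
assert (sum(parts.values(), Fraction(0)) - q).denominator == 1
assert sorted(f.denominator for f in parts.values()) == [3, 4]
```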
Lemma. For a ZZ-module M and nonzero m in M, there exists a prime p and
a homomorphism
f: M -> (ZZ[1/p])/ZZ with f(m) <> 0.
Proof. The annihilator of m in M is an ideal (n) in ZZ, so the subgroup
ZZ.m generated by m is isomorphic to ZZ/n. Choose any prime p | n and a
surjective map ZZ/n ->> ZZ/p (if n = 0 then any prime p works). Compose
with an embedding ZZ/p in (ZZ[1/p])/ZZ and extend from ZZ.m to the
whole of M using injectivity of (ZZ[1/p])/ZZ. QED
Corollary. Consider the set of all homomorphisms from M to the
injective modules (ZZ[1/p])/ZZ. The product of all these homomorphisms
gives an embedding of M into a product of injective ZZ-modules, which
is itself injective.
====
Now for modules over a general ring R (you need to worry about left and
right modules if R is noncommutative). R is a ZZ-algebra. If A is a
ZZ-module, the ZZ-module Hom_ZZ(R, A) becomes an R-module under
premultiplication. Namely, r acts on Hom_ZZ(R, A) by f |-> fr, where fr
is the map s -> f(sr) for s in R. That is, do the multiplication in the
domain R before applying the map (IMPORTANT). One checks the following
points:
1. For an R-module M and ZZ-module N the following identity holds:
Hom_ZZ(M, N) = Hom_R(M, Hom_ZZ(R, N)).
The l.-h.s. is just maps of Abelian groups. On the right, we map into a
much bigger module, but have to obey all the R-linearity conditions.
2. If I is an injective ZZ-module then Hom_ZZ(R, I) is an injective
R-module.
3. It follows that for any R-module M, the product of all homomorphisms
M -> Hom_ZZ(R, (ZZ[1/p])/ZZ) is an embedding of M into an injective
R-module.
====
Finally, for algebraic geometers: let F be a sheaf of OX-modules over a
ringed space X. For P in X, the stalk F_P is an OX_P-module. Embed each
stalk F_P into an injective OX_P-module I_P. This defines an
OX-homomorphism of F into the sheaf of discontinuous sections of
DisjointUnion I_P, which is an injective sheaf.