
Natural isomorphism of VxV* and End(V)

  1. Apr 11, 2005 #1
    I'm looking for help constructing the natural isomorphism between [itex]V\otimes V^*[/itex] and [itex]\operatorname{End}(V)[/itex], with V a vector space.

    So far, I think I should have functors F and G which take [itex]V \mapsto V\otimes V^*[/itex] and [itex]V \mapsto \operatorname{End}(V)[/itex]. I'm having a little trouble figuring out how the functors should act on morphisms though. For example, the only sensible thing that I can get F(f) to be is the morphism [itex]v\otimes \sigma \mapsto f(v)\otimes (f^{-1})^*\sigma[/itex]. Only, here I have to assume that f is invertible, which I don't want. The functor should be defined for all morphisms, right?

    thanks
    -Don
     
  3. Apr 12, 2005 #2

    matt grime

    Science Advisor
    Homework Helper

    Firstly this is only true for finite dimensional vector spaces.

    You do not need a functor - you need an isomorphism.

    Try sending the element [itex]v\otimes \theta[/itex] to an endomorphism of V, say g, where g(w) = \theta(w)v. Now try some dimension counting.
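
    A minimal numerical sketch of this map (the variable names and the use of numpy are my own illustration, not part of the thread): identifying [itex]v\otimes \theta[/itex] with the outer product of v and theta realizes the rank-one endomorphism g(w) = \theta(w)v, and both spaces have dimension n^2.

    ```python
    import numpy as np

    # Identify v (x) theta in V (x) V* with the rank-one endomorphism
    # g(w) = theta(w) * v, i.e. the outer product of v and theta.
    n = 3
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)        # a vector in V
    theta = rng.standard_normal(n)    # a covector in V*, acting by theta @ w
    w = rng.standard_normal(n)

    g = np.outer(v, theta)            # the matrix of the endomorphism g
    assert g.shape == (n, n)
    assert np.allclose(g @ w, (theta @ w) * v)   # g(w) = theta(w) v

    # Dimension count: V (x) V* and End(V) both have dimension n^2,
    # and sums of such rank-one matrices span all of End(V).
    ```

    This is where the dimension counting comes in: the map lands in the n^2-dimensional space End(V), and rank-one matrices span it.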


    There is a functor approach, but I don't think you should go down that road (tensor is adjoint to hom, plus Yoneda).
     
  4. Apr 12, 2005 #3
    right.

    OK, but I want to show that this is a natural isomorphism of functors.

    yep.

    what's wrong with the functor approach? Why shouldn't I go down that road?
     
  5. Apr 12, 2005 #4

    matt grime

    Science Advisor
    Homework Helper

    because it is unnecessarily complicated
     
    Last edited: Apr 12, 2005
  6. Apr 12, 2005 #5
    to matt: thanks for your help

    anyone else here willing to help me with my functors?
     
  7. Apr 12, 2005 #6

    matt grime

    Science Advisor
    Homework Helper

    well, the other thing about the functor approach is that to show that there is a natural transformation one probably ought to start by showing that there is an isomorphism between the objects V\otimes V^* and End(V).

    To start on this approach without bothering to write down an obvious isomorphism, I would start by attempting to create a representable functor, and apply Yoneda to it.

    The reason why I don't think this is a good idea is that duality and hom are only adjoint on one side for categories other than simple ones such as finite dimensional vector spaces (that is, if duality even exists), so a purely functor theoretic approach may not work out very easily, and at some point you will implicitly use the isomorphism constructed above.
     
  8. Apr 12, 2005 #7

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    whether you want a functorial isomorphism or not, matt's advice is still correct. i.e. he is telling you how to write down an isomorphism between any two VALUES of the two functors. But that is exactly what an isomorphism of functors is:

    i.e. an isomorphism of functors F,G means
    1) that for every object V you give an isomorphism F(V)-->G(V).
    2) THEN you show that these isomorphisms are natural with respect to changing the space V, i.e. that for every map V-->W, you have corresponding maps F(V)-->F(W), and G(V)-->G(W), and you show these commute with the maps F(V)-->G(V), and F(W)-->G(W) from part 1.

    So in any case do what matt said first, i.e. find a natural looking map
    VtensV*-->Hom(V,V), i.e. a bilinear map VxV*-->Hom(V,V).

    later worry about the functorial aspect of it. oh by the way, as matt's adjointness remark implies, VtensV* is not really a functor of V, i.e. it is neither covariant nor contravariant. So you are going to have trouble with this problem.

    I.e. the correct statement is probably that V*tensW admits a natural map to Hom(V,W), which is an isomorphism of functors of two variables, when the spaces are finite dimensional. But these functors of two variables are of opposite variance in each variable. of course there are artificial ways to hide that by working in the "opposite" category but they confuse me.
     
    Last edited: Apr 12, 2005
  9. Apr 12, 2005 #8

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    here is another fancy schmancy approach which matt is hinting at: the [linear] functor V*tens(W) = F(W) of W is characterized by three properties:
    1) it is right exact.
    2) it commutes with direct sums.
    3) F(R) = V*

    So if Hom(V,.) also has these properties, then it is isomorphic to the functor V*tens(.).
    [This is an old theorem of Sammy Eilenberg and [independently] Charles Watts I believe, and whose proof is trivial, at least if you like this kind of thing.]

    E.g. to show Hom(V,.) is right exact, let t:A-->B be a surjection. then we have to show that the map Hom(V,A)-->Hom(V,B) taking f to (tof) is also a surjection. But since V is a vector space, we can always lift any map out of it through a surjection.

    The problem comes in asking whether Hom(V,.) commutes with direct sums, since it does not in general, i.e. it commutes with direct products. That is why these functors actually are not the same in general.

    Of course finite direct sums and finite direct products are the same thing, so maybe (?) this shows these functors agree on finite dimensional vector spaces, where only finite direct sums are allowed.

    Finally, of course F(R) = Hom(V,R) = V*, so the last property is true.

    Lets try this investigation a little differently: Lets ask whether the functor Hom(V*,W) = G(V) is isomorphic to F(V) = VtensW. I.e. prop. 2) asks if G commutes with direct sums. Again we have problems because the functor (.)* changes direct sums into direct products, although again they are ok for finite sums. is that ok??

    Then we ask if G is right exact, i.e. given a surjection t:A-->B, and corresponding map

    t*:B*-->A*, is the corresponding map Hom(A*,W)-->Hom(B*,W) taking f to (fot*) a surjection? Well t* is injective, so the point is whether every map out of the subspace B* extends to map out of the bigger space A*, which is ok in finite dimensions.

    Finally we ask if G(R) is W? But G(R) = Hom(R*,W) is certainly W since R* = R.

    So for me the very abstract functorial approach is iffy because of the unnatural restriction to the category of finite dimensional spaces, for which my store of general theorems on the category of modules would need modification.


    I will say I enjoyed getting a less basic, hence more interesting, question. This seems to be sort of a first year grad algebra question, or upper undergrad, at a strong school, right? Anyway, thanks for the memories.
     
    Last edited: Apr 12, 2005
  10. Apr 12, 2005 #9
    yes. And that's the isomorphism matt gave me. I actually already knew the mapping.

    which is where I got stuck, since I could not construct the morphisms from F(V) to F(W) (nor those from G(V) to G(W)).

    Right. [itex]u \mapsto \sigma(u)v[/itex] is the mapping. As matt correctly stated.

    Maybe this addresses my concern. Are these things even functors? Will we be able to satisfy F(fg) = F(f)F(g) if F is somehow "part covariant" and "part contravariant"? That's the trouble I was running into, trying to construct the functors.

    Hmm.. maybe that's what I need to do.
     
  11. Apr 12, 2005 #10
    I'm not familiar with the theorem.


    You know, if you dislike the restriction to finite dimensional spaces, I would be happy to just show a natural injection between functors instead of a natural isomorphism, which I think will exist in the arbitrary dimensional case, right?


    I'm glad to hear you say it's not basic; that way I don't feel bad for not getting it right. I thought it was going to be a simple exercise in natural isomorphisms, but I think I was wrong.
    yep.
     
  12. Apr 12, 2005 #11

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    It might help if you just take your head out of category land and put it back into linear algebra land -- sometimes just trying to refocus your brain helps immensely.

    Your question seems to be that you need to find a way to relate End(V), End(W), and a linear map T:V-->W. I guess we know several ways to do this if T is invertible: for example, for S:V-->V, we have [itex]T S T^{-1}:W \rightarrow W[/itex]. Or, for R:W-->W we have [itex]T^{-1} R T:V \rightarrow V[/itex]...

    T doesn't necessarily have to be an invertible map, though -- for example, an mxn matrix M (m < n) with full rank (i.e. m) will have a right inverse N such that MN = I. In fact, I think you can even go so far as to find a particular N such that MN = I and that NM is a diagonal matrix whose entries are m 1's and (n-m) 0's. N might even be uniquely determined by that criterion. Maybe you can replicate one of the above with these?
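
    A quick numerical check of this remark, using the Moore-Penrose pseudoinverse as one candidate for the hypothetical right inverse N (with this choice NM comes out as a projection, though not diagonal in general; it is diagonalizable with m eigenvalues 1 and n-m eigenvalues 0):

    ```python
    import numpy as np

    # For a full-rank m x n matrix M with m < n, N = pinv(M) is a
    # particular right inverse: MN = I, and NM is a projection.
    rng = np.random.default_rng(1)
    m, n = 2, 4
    M = rng.standard_normal((m, n))   # full row rank (almost surely)
    N = np.linalg.pinv(M)             # Moore-Penrose pseudoinverse

    assert np.allclose(M @ N, np.eye(m))    # MN = I
    P = N @ M                               # n x n
    assert np.allclose(P @ P, P)            # NM is a projection
    assert np.linalg.matrix_rank(P) == m    # m eigenvalues 1, n-m zeros
    ```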
     
    Last edited: Apr 12, 2005
  13. Apr 12, 2005 #12

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    I did not say it was not a "simple exercise" in natural isomorphisms, but simple is a relative term, and the concept of "natural isomorphisms" itself, i.e. functors, is not as basic as most of what we see here. Actually probably no exercise in functorial isomorphism is simple, or at least not brief, but they all do follow the same pattern.

    Well, actually I see that your problem is with the one aspect of the problem that is genuinely confusing, i.e. the fact that it is not true, since the gadget is neither covariant nor contravariant in both variables. but i will indulge myself with some explanation anyway. please forgive me.

    The fundamental rule to show things are isomorphic is to write down the most obvious map you can. then see if it is an isomorphism. after that the naturality part is usually automatic, if confusing.

    As my superb teacher put it: "Write down the only map you can think of. If that isn't it, it takes a genius to come up with one. Then check that when you change spaces you get maps, and everything commutes. [thats naturality]. To show it is an isomorphism, try to write down a map in the other direction and do it all again, and show the two maps are inverses."

    so here, we write down the obvious bilinear map V*xW-->Hom(V,W) taking (f,y) to the map taking x to f(x)y. This is bilinear hence induces a map V*tensW-->Hom(V,W). so we wrote down the most obvious maps.

    Now to show it is an isomorphism is impossible unless we use some properties of finite dimensional spaces, and the most powerful one is the existence of bases, and the theory of dimension. I.e. there is not going to be a really natural map in the other direction. But for instance, both those spaces have the same dimension, so this map is an isomorphism if and only if it is surjective, or injective.

    a basis for the target space is given by the maps g(i,j) taking xi to yj and all other xk to zero, where {xi} and {yj} are bases of V,W resp. (These are Hurkyl's matrices, having a 1 in only one entry.)

    So try to show some element maps to this map g(i,j). Well, these things are always either trivial or impossible as my teacher said, so let's just close our eyes and write down the simplest possible element of V*tensW we can think of that involves xi and yj. hey, how about xi*(tens)yj???? where xi* is the map taking xi to 1, and all other xk to zero.

    Then the image map is the one taking w to xi*(w).yj, in particular taking xi to yj, and every other xk to 0.yj = 0. seems good. so the map is onto hence isom. now we have checked it is an isomorphism in finite dimensions.
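
    A numerical sketch of this surjectivity argument with V = R^n, W = R^m and the standard bases (rendering xi*(tens)yj as an outer product is my own illustration): each xi*(tens)yj maps to the "matrix unit" sending xi to yj and every other basis vector to zero.

    ```python
    import numpy as np

    # The image of xi* (x) yj under the map V* (x) W -> Hom(V, W)
    # is the m x n matrix unit sending x_i to y_j and all other x_k to 0.
    n, m = 3, 2
    for i in range(n):
        for j in range(m):
            xi_star = np.eye(n)[i]      # the covector xi* (dual basis)
            yj = np.eye(m)[j]           # the basis vector yj of W
            g = np.outer(yj, xi_star)   # image endomorphism, m x n matrix
            assert np.allclose(g @ np.eye(n)[i], yj)   # xi  |-> yj
            for k in range(n):
                if k != i:
                    assert np.allclose(g @ np.eye(n)[k], 0)  # xk |-> 0
    # These matrix units span Hom(V, W), so the map is onto,
    # hence an isomorphism by the dimension count.
    ```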

    i realize you had no trouble with any of this.

    now for the naturality part. this part has nothing to do with the finite dimensionality. this part is the "simple exercise" in natural maps.

    so what happens if we "change spaces"? i.e. take any map f:V-->V'. then we get a natural map from V'*-->V*, and hence V'*(tens)W -->V*(tens)W, and a natural map Hom(V',W)-->Hom(V,W). Is that ok so far?

    so we have checked that "when you change spaces, you get maps."

    then these compose somehow or other. i.e. I guess we can compose

    V'*(tens)W -->V*(tens)W-->Hom(V,W), and also

    V'*(tens)W -->Hom(V',W)-->Hom(V,W).

    the simple exercise is to check these two compositions are equal. i.e. we check that when you change spaces and get maps, that "everything commutes."

    let me try one, so as not to be overly cavalier: say we start from the element

    g(tens)w in V'*(tens)W, which goes to (gof)(tens)w in V*(tens)W.

    Then it goes to the map taking v in V to g(f(v)).w.

    Now in the other composition, we send g(tens)w to the map taking v' in V' to g(v').w, then to the map taking v in V to the image of f(v), i.e. sending v to

    g(f(v)).w. this is the same result!! (It always is in my experience. I.e. in this subject either it always checks out trivially like this, or you get stuck somewhere. It seldom comes out wrong.)
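
    The same check can be run numerically (the random choices of f, g, w here are my own illustration): both compositions send g(tens)w to the map taking v to g(f(v)).w.

    ```python
    import numpy as np

    # Naturality square for f : V -> V', with W fixed.
    rng = np.random.default_rng(2)
    dimV, dimVp, dimW = 3, 4, 2
    f = rng.standard_normal((dimVp, dimV))   # a linear map f : V -> V'
    g = rng.standard_normal(dimVp)           # a covector g in V'*
    w = rng.standard_normal(dimW)            # a vector in W
    v = rng.standard_normal(dimV)            # a test vector in V

    # Route 1: pull back the covector first, (g o f) (x) w, then evaluate.
    g_of = g @ f                             # the covector g o f in V*
    route1 = (g_of @ v) * w                  # ((g o f)(v)) w
    # Route 2: evaluate first, v' -> g(v') w, then precompose with f.
    route2 = (g @ (f @ v)) * w               # g(f(v)) w
    assert np.allclose(route1, route2)       # the square commutes
    ```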


    Now I suspect you had no trouble with this either and that your only question was the very valid one: "in what sense are these two guys functors of V?" and I say they are in fact not.

    I.e. I say just do the exercise as a natural transformation of a pair of functors of two variables, with opposite variance in the two variables.

    Of course that means you have to do the space - changing thing again by changing W instead of V.

    And then if you look at it maybe, just maybe, you can interpret it somehow as a process of changing both spaces at once, but i doubt it in any meaningful sense.

    Forgive me Hurkyl, i'm tired of matrices at the moment. As the great Emil Artin once wrote [roughly, in his book, Geometric Algebra]: "linear algebra should always be done insofar as possible without mentioning matrices. Proofs with matrices generally take twice as much space as those obtained by throwing the matrices out. Of course sometimes they cannot be dispensed with, e.g. one may need to compute a determinant."

    Of course you weren't really using matrices, just the word "matrix", but any construction that requires the maps to be invertible is seldom "natural" in the sense of categories.
     
    Last edited: Apr 12, 2005
  14. Apr 12, 2005 #13

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    It just seemed more convenient to say that NM was a matrix whose diagonal entries are 1 and 0 than to say something like that NM was the identity on a subspace and the zero transformation on a complementary subspace... or that your basis elements were eigenvectors with eigenvalues 1 and 0. :smile:

    Now that I think of it, though, I could just say NM is a projection. That would do it. Ah well.
     
  15. Apr 12, 2005 #14
    OK. I see my mistake now. I think it will be easy to show naturality in the two arguments separately. Thank you.


    I guess I would like to know if there is a way we can treat the two arguments in one go, maybe I'll poke around in my textbook a bit some time, but for now, I'm happy with the answer you've given. Thanks again.
     
  16. Apr 12, 2005 #15

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    what textbook are you using, if i may ask?

    By the way, here is another cute little approach to characterizing the functors Hom(.,W), taken from the class notes I handed out in my graduate algebra class. [Note to the guys in the "academic advice to students" thread: here is another little gem that the student who skips class would never have learned to this day, in all likelihood.]

    Theorem: If F is any linear functor which turns cokernels into kernels, and sums into products, and such that F(R) = W, then F is isomorphic as a functor to Hom(.,W).

    [the proof in my lecture notes, without checking all details is only a few lines long.]

    In the case of F(V) = V*(tens)W, we have a linear functor such that F(R) = W, and again for finite dimensional V and finite direct sums, the other two properties hold as well, so presumably the proof shows that in this case F is isomorphic to Hom(.,W) for finite dimensional V.


    By the way I could be motivated at some point to put this 60 page section of my notes, entitled "Hom, Duality and representable functors, tensor products and alternating products", on my website.

    As to treating both arguments "at one go", you cannot, no matter how you stack it. You can pretend to, in the opposite category, by defining a map from V-->W to be a map from W to V, but that is just changing the names.

    As my calc professor put it, "when faced with two things to check, check one, then the other - do not be like the ass between two bales of hay."
     
    Last edited: Apr 12, 2005
  17. Apr 13, 2005 #16
    We're using Jacobson Basic Algebra I, but most of the category theory is just lecture notes. I also have the second volume, which has a chapter on category theory.

    I'm an ass!?
     
  18. Apr 13, 2005 #17

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    That's the way I felt too, when he said that. My apologies. In the old days, professors were blunter than is considered politically correct today.

    Some of my questions e.g. were greeted with "But that's the stupid way of looking at it!". How often has that been heard in a classroom in the last 20 years, no matter what the poser?

    Jacobson is a great expert and his book is superb, but the category theoretic statement you were asked to prove seems actually wrong, i.e. End(V) is apparently not really a functor of V, so the notes you are using may be a bit flawed.


    If you wish another source, (independently written hence presumably with different mistakes) I have just finished scanning my 1996 class notes on Hom, Duality, and Representable Functors, and can send them as a large pdf file if desired, to an email address, if one is provided.

    (In case I forget later, on page 41 of my notes, Z^n(tens)T should have been Z^n(plus)T)
     
    Last edited: Apr 13, 2005
  19. Apr 13, 2005 #18

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    As an exercise, see if you can prove this:

    Theorem: If F is any linear functor which turns cokernels into kernels, and sums into products, and such that F(R) = R, then F is isomorphic as a functor to Hom(.,R), (the dual functor taking V to V^*).
     
  20. Apr 14, 2005 #19
    Oh, the mistake is all mine, not Jacobson's.


    um sure, I'd like to take a look. I'll PM you my email address
     
  21. Apr 14, 2005 #20
    I'm working on this problem. I have to bone up on some of the concepts first, but it looks like a nice problem. Give me some time?
     