Tensor product of modules

In summary, the thread starter is reading about tensor products of modules and has a theorem in a book that leaves parts of the proof to the reader; an attempted proof is attached. The main question is whether a simple tensor x⊗y = 0 forces x or y to be zero, which turns out not to be the case. The proof is easy to follow if R is commutative, but it needs to be established in the general non-commutative case to be sure.
  • #1
Arian.D
I'm reading about the tensor product of modules, and there's a theorem in the book that leaves parts of the proof to the reader. I've attached the file. I didn't put this in the homework section because, first of all, I thought the question was too advanced for it, and also because I want to discuss something else with people on this forum.

I'm wondering what happens when a simple tensor becomes zero. I mean, suppose we have [itex]x\otimes y = 0[/itex]. Does that mean x or y must be zero, or is it possible for them to be nonzero while their tensor product still turns out to be zero?
If possible, please check my proof in the file as well.

I know I'm asking a lot, but please help me as quickly as possible, because tomorrow I'll be giving a presentation about tensors. The professor is also the head of the math department of our university, and I'm the only undergraduate student in his class, so I'm very determined to give a successful presentation in class tomorrow and I really need your help, guys.
 

Attachments

  • Theorem 5.9 - Tensor product, module-theory.pdf
  • #2
Your proof seems correct.

The answer to your other question is no. Consider the [itex]\mathbb{Z}[/itex]-module [itex]\mathbb{Z}\otimes \mathbb{Z}_2[/itex], then

[tex]2\otimes 1= 2(1\otimes 1)=1\otimes 2=1\otimes 0=0,[/tex]

but neither 2 nor 1 is zero.

In general, in the R-module [itex]M\otimes N[/itex] it holds that [itex]m\otimes n=0[/itex] if and only if there exist elements [itex]m^\prime_j\in M[/itex] and [itex]a_j\in R[/itex] such that

[tex]\sum_j a_jm_j^\prime = m[/tex]

and

[tex]a_jn=0[/tex]

for all j.
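
For instance, the [itex]\mathbb{Z}\otimes \mathbb{Z}_2[/itex] example above fits this criterion with a single term: take [itex]a_1=2[/itex] and [itex]m_1^\prime=1[/itex], so that

[tex]a_1 m_1^\prime = 2\cdot 1 = 2 = m \ \text{in } \mathbb{Z}, \qquad a_1 n = 2\cdot 1 = 0 \ \text{in } \mathbb{Z}_2,[/tex]

which is exactly why [itex]2\otimes 1 = 0[/itex].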
 
  • #3
I see. Thanks.
Then why does my proof still stand? At one point I said that y or (r+I) must be zero. Isn't that false? I don't get it.
 
  • #4
Oh right, that part is incorrect of course.

Can you prove the following equalities:

[tex]Im(\phi(j\otimes 1))=\phi(Im(j\otimes 1))=\phi(Ker(\pi\otimes 1))=Ker((\pi\otimes 1)\phi^{-1})[/tex]
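
As a hint (assuming, as in the usual setup for this theorem, that [itex]\phi[/itex] is an isomorphism and that [itex]Im(j\otimes 1)=Ker(\pi\otimes 1)[/itex] by exactness of the tensored sequence): the first equality is the general identity [itex]Im(\phi\circ g)=\phi(Im(g))[/itex] applied to [itex]g=j\otimes 1[/itex], the second is the exactness statement, and the third follows because [itex]\phi[/itex] is invertible:

[tex]x\in Ker((\pi\otimes 1)\circ \phi^{-1}) \iff (\pi\otimes 1)(\phi^{-1}(x))=0 \iff \phi^{-1}(x)\in Ker(\pi\otimes 1) \iff x\in \phi(Ker(\pi\otimes 1)).[/tex]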
 
  • #5
Actually, I've proved the theorem in another way; this time it's tedious and quite unusual.
I hope you enjoy it.

Please verify if it's correct.
 

Attachments

  • Theorem 5.9 - solution.pdf
  • #6
Your proof that f is well-defined is very weird. It's just a chain of equalities; I can't make anything of it.

I don't really get why you're making it so hard. Your original proof was very easy, except for that one gap that you still need to prove.
 
  • #7
Maybe I am making it too easy by assuming R is commutative, but then it is pretty easy to see that the obvious maps, taking [itex]a\otimes m \mapsto [am][/itex] and [itex][m]\mapsto m\otimes [1][/itex],

are well defined and inverse to each other. done.
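
A sketch of the well-definedness in the commutative case, assuming the theorem in question is the standard isomorphism [itex](R/I)\otimes_R M\cong M/IM[/itex] (the attached statement may differ in notation, and in the commutative case the order of the factors doesn't matter): the map [itex]R/I\times M\to M/IM[/itex], [itex]([a],m)\mapsto [am][/itex], is well defined because [itex]a-a^\prime\in I[/itex] implies [itex]am-a^\prime m\in IM[/itex], and it is R-bilinear, so the universal property of the tensor product induces a map [itex][a]\otimes m\mapsto [am][/itex]. In the other direction, [itex][m]\mapsto [1]\otimes m[/itex] is well defined on [itex]M/IM[/itex] because for [itex]a\in I[/itex] and [itex]m\in M[/itex],

[tex][1]\otimes am = [a]\otimes m = [0]\otimes m = 0.[/tex]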
 
  • #8
micromass said:
Your proof that f is well-defined is very weird. It's just a chain of equalities; I can't make anything of it.

I don't really get why you're making it so hard. Your original proof was very easy, except for that one gap that you still need to prove.
Okay then. Assume f is well-defined and please check the rest.
My original proof is very easy, but it's not so easy to fill that gap, unless something comes to your mind right now; to me it doesn't look very obvious.

mathwonk said:
Maybe I am making it too easy by assuming R is commutative, but then it is pretty easy to see that the obvious maps, taking [itex]a\otimes m \mapsto [am][/itex] and [itex][m]\mapsto m\otimes [1][/itex],

are well defined and inverse to each other. done.

But I have to prove it in the general non-commutative case tomorrow. :(
 
  • #9
Then I suggest trying to provide all the reasons for my steps, and then seeing how they go in the non-commutative case. Probably they are exactly the same. My answer is like a cab from the airport that drops you off on the opposite side of the street from your hotel, but expects you to cross the street yourself.
 
  • #10
mathwonk said:
Then I suggest trying to provide all the reasons for my steps, and then seeing how they go in the non-commutative case. Probably they are exactly the same. My answer is like a cab from the airport that drops you off on the opposite side of the street from your hotel, but expects you to cross the street yourself.

Yup, but I think I've already got a proof myself. The professor liked my proof and said it was correct, so I'm now fairly sure it's right. But it wouldn't hurt if others checked it too.
I like my proof a lot, because it uses many ideas in algebra that are new to me, and it was the first time I've employed them in such an advanced algebraic topic (advanced at my level, I mean, not at the level of people like you and micromass). So if you validate my proof I'll be very appreciative.
 
  • #11
By the way, I have a general question.
Suppose that A, B and C are three R-modules, each R-isomorphic to A', B' and C' respectively. Here's the question:
if [itex] A\to B \to C[/itex] is an exact sequence, does it follow that the sequence [itex] A' \to B' \to C'[/itex] is exact as well? I'm asking because when two algebraic structures are isomorphic, their algebraic properties are preserved: we're just relabeling things in the two sets and keeping everything else the same. Is that right?
 
  • #12
Arian.D said:
By the way, I have a general question.
Suppose that A, B and C are three R-modules, each R-isomorphic to A', B' and C' respectively. Here's the question:
if [itex] A\to B \to C[/itex] is an exact sequence, does it follow that the sequence [itex] A' \to B' \to C'[/itex] is exact as well? I'm asking because when two algebraic structures are isomorphic, their algebraic properties are preserved: we're just relabeling things in the two sets and keeping everything else the same. Is that right?

How do you define the maps between A', B' and C' ??
 
  • #13
micromass said:
How do you define the maps between A', B' and C' ??

Ahh, right. Well, we'd define them in the most natural way. For example, if in the first sequence we have the two mappings [itex]\varphi[/itex] and [itex]\psi[/itex], then in the new sequence we'd have the two new mappings [itex]\varphi(f_1)[/itex] and [itex]\psi(f_2)[/itex], where [itex]f_1[/itex] and [itex]f_2[/itex] are the isomorphisms of A and B with A' and B' respectively. I guess that clears up the ambiguity.

By the way, how can I write a function above an arrow in LaTeX? I want to write [itex]\varphi[/itex] above an arrow from A to B, but I don't know how to do that :(
 
  • #14
I don't really get what you mean by [itex]\varphi(f_1)[/itex] and [itex]\psi(f_2)[/itex]. The functions [itex]\varphi[/itex] and [itex]\psi[/itex] act on modules, but you make it seem like they act on maps :confused:

Did you perhaps mean: [itex]f_2\circ \varphi\circ f_1^{-1}:A^\prime\rightarrow B^\prime[/itex] as the first map and [itex]f_3\circ \psi\circ f_2^{-1}[/itex] for the second map??
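
In diagram form (just a sketch of this setup; the primed maps are defined precisely so that both squares commute):

[tex]\begin{array}{ccccc} A & \xrightarrow{\ \varphi\ } & B & \xrightarrow{\ \psi\ } & C\\ f_1\downarrow & & f_2\downarrow & & f_3\downarrow\\ A^\prime & \xrightarrow[\ f_2\circ\varphi\circ f_1^{-1}\ ]{} & B^\prime & \xrightarrow[\ f_3\circ\psi\circ f_2^{-1}\ ]{} & C^\prime \end{array}[/tex]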
 
  • #15
Arian.D said:
By the way, how can I write a function above an arrow in LaTeX? I want to write [itex]\varphi[/itex] above an arrow from A to B, but I don't know how to do that :(

Code:
A\xrightarrow{f} B

[tex]A\xrightarrow{f} B[/tex]
 
  • #16
micromass said:
I don't really get what you mean by [itex]\varphi(f_1)[/itex] and [itex]\psi(f_2)[/itex]. The functions [itex]\varphi[/itex] and [itex]\psi[/itex] act on modules, but you make it seem like they act on maps :confused:

Did you perhaps mean: [itex]f_2\circ \varphi\circ f_1^{-1}:A^\prime\rightarrow B^\prime[/itex] as the first map and [itex]f_3\circ \psi\circ f_2^{-1}[/itex] for the second map??

Hmmm, let me tell you what is in my head then.
If A and A' are isomorphic, then there exists an isomorphism between them. This isomorphism takes an element of A and sends it to an element of A' which is just the same element with a new name, or sign, or whatever. So in this way I could find a map from A' to B' similar to the one I've already been given between A and B. That's my idea. I don't know if I've managed to say what's in my head, but I'll try again if I'm still unclear.

micromass said:
Code:
A\xrightarrow{f} B

[tex]A\xrightarrow{f} B[/tex]

Thanks.
 
  • #17
Arian.D said:
Hmmm, let me tell you what is in my head then.
If A and A' are isomorphic, then there exists an isomorphism between them. This isomorphism takes an element of A and sends it to an element of A' which is just the same element with a new name, or sign, or whatever. So in this way I could find a map from A' to B' similar to the one I've already been given between A and B. That's my idea. I don't know if I've managed to say what's in my head, but I'll try again if I'm still unclear.



Thanks.

I know what you mean, but if you want to prove things, then you should spell out the maps specifically. Look again at my post 14, are those the maps you mean??
 
  • #18
micromass said:
I know what you mean, but if you want to prove things, then you should spell out the maps specifically. Look again at my post 14, are those the maps you mean??

Yup. They are the same maps I have in my mind.
 
  • #19
OK, so then we wish to prove that

[tex]Ker(f_3\circ \psi \circ f_2^{-1})=Im(f_2\circ \varphi \circ f_1^{-1})[/tex]

I'll do one side of the inclusion:
Take x in the kernel, then [itex]f_3(\psi(f_2^{-1}(x)))=0[/itex]. Since [itex]f_3[/itex] is an isomorphism, its kernel is 0, thus [itex]\psi(f_2^{-1}(x))=0[/itex]. Thus [itex]f_2^{-1}(x)\in Ker(\psi)[/itex]. By exactness, we have that [itex]Ker(\psi)=Im(\varphi)[/itex], thus there exists a [itex]y\in A[/itex] such that [itex]\varphi(y)=f_2^{-1}(x)[/itex]. In other words, [itex]f_2(\varphi(y))=x[/itex]. Write [itex]z=f_1(y)[/itex], then [itex]f^{-1}_1(z)=y[/itex]. Thus
[tex]f_2(\varphi(f_1^{-1}(z)))=f_2(\varphi(y))=x[/tex]
The left-hand side is certainly in the image, thus [itex]x\in Im(f_2\circ \varphi\circ f_1^{-1})[/itex].

Can you do the other inclusion??
 
  • #20
micromass said:
OK, so then we wish to prove that

[tex]Ker(f_3\circ \psi \circ f_2^{-1})=Im(f_2\circ \varphi \circ f_1^{-1})[/tex]

I'll do one side of the inclusion:
Take x in the kernel, then [itex]f_3(\psi(f_2^{-1}(x)))=0[/itex]. Since [itex]f_3[/itex] is an isomorphism, its kernel is 0, thus [itex]\psi(f_2^{-1}(x))=0[/itex]. Thus [itex]f_2^{-1}(x)\in Ker(\psi)[/itex]. By exactness, we have that [itex]Ker(\psi)=Im(\varphi)[/itex], thus there exists a [itex]y\in A[/itex] such that [itex]\varphi(y)=f_2^{-1}(x)[/itex]. In other words, [itex]f_2(\varphi(y))=x[/itex]. Write [itex]z=f_1(y)[/itex], then [itex]f^{-1}_1(z)=y[/itex]. Thus
[tex]f_2(\varphi(f_1^{-1}(z)))=f_2(\varphi(y))=x[/tex]
The left-hand side is certainly in the image, thus [itex]x\in Im(f_2\circ \varphi\circ f_1^{-1})[/itex].

Can you do the other inclusion??

Sure. It's easy.

Take x in [itex]Im(f_2\circ \varphi \circ f_1^{-1})[/itex]. Because x is in the image, there exists y in the domain of [itex]f_2\circ \varphi \circ f_1^{-1}[/itex], i.e. in A', such that [itex]f_2\circ \varphi \circ f_1^{-1}(y)=x[/itex].
Since [itex]f_2[/itex] is an isomorphism it's invertible, and we get:
[itex]f_2^{-1}(x)=\varphi \circ f_1^{-1}(y)[/itex]
Now apply [itex]\psi[/itex] to both sides to obtain:
[itex]\psi \circ f_2^{-1}(x) = \psi \circ \varphi \circ f_1^{-1}(y) = 0 [/itex]
The last equality holds because [itex] \psi \circ \varphi = 0 [/itex]: by exactness, anything we put into [itex]\varphi[/itex] lands in the kernel of [itex]\psi[/itex] and therefore becomes zero.
Finally, since [itex]f_3[/itex] is a homomorphism it sends 0 to 0, hence:
[itex] f_3 \circ \psi \circ f_2^{-1} (x) = f_3(0) = 0,[/itex]
which says [itex] x \in Ker(f_3\circ \psi \circ f_2^{-1})[/itex]. We're done!

It's a very neat result; I think it'll make a lot of things easier for me in the future, and the good thing is that it works for almost all the algebraic structures I know, because nowhere in the proof did we assume anything beyond the definitions of an exact sequence and of the kernel and image of a homomorphism, all of which carry over to groups, rings and modules. Thanks for your help, micromass.
 
  • #21
Arian.D said:
I'm reading about the tensor product of modules, and there's a theorem in the book that leaves parts of the proof to the reader. I've attached the file. I didn't put this in the homework section because, first of all, I thought the question was too advanced for it, and also because I want to discuss something else with people on this forum.

I'm wondering what happens when a simple tensor becomes zero. I mean, suppose we have [itex]x\otimes y = 0[/itex]. Does that mean x or y must be zero, or is it possible for them to be nonzero while their tensor product still turns out to be zero?
If possible, please check my proof in the file as well.

I know I'm asking a lot, but please help me as quickly as possible, because tomorrow I'll be giving a presentation about tensors. The professor is also the head of the math department of our university, and I'm the only undergraduate student in his class, so I'm very determined to give a successful presentation in class tomorrow and I really need your help, guys.

In general, the issue is whether the modules being tensored have torsion or not. If they have torsion elements, then a tensor of nonzero elements can be zero; if they don't, it cannot.
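
A standard further illustration of the torsion case: in [itex]\mathbb{Q}\otimes_{\mathbb{Z}} \mathbb{Z}_2[/itex] every simple tensor vanishes, since

[tex]q\otimes a = \left(2\cdot\tfrac{q}{2}\right)\otimes a = \tfrac{q}{2}\otimes 2a = \tfrac{q}{2}\otimes 0 = 0,[/tex]

so [itex]\mathbb{Q}\otimes_{\mathbb{Z}} \mathbb{Z}_2 = 0[/itex] even though both modules are nonzero.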
 

What is a tensor product of modules?

A tensor product of modules is a mathematical construction that combines two modules over a ring into a new module. It is a generalization of the tensor product of vector spaces, and it converts bilinear (more generally, R-balanced) maps defined on the pair of modules into linear maps defined on the new module.

How is a tensor product of modules calculated?

The tensor product of two modules is built using the tensor product construction, which involves taking the free abelian group on the Cartesian product of the two modules and then factoring out certain relations to ensure that the required bilinearity properties are satisfied. This process can be quite involved and may require working with generators and relations.
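
Concretely, for a right R-module M and a left R-module N, the subgroup factored out is generated by the elements

[tex](m_1+m_2,n)-(m_1,n)-(m_2,n),\qquad (m,n_1+n_2)-(m,n_1)-(m,n_2),\qquad (mr,n)-(m,rn)[/tex]

for all [itex]m,m_1,m_2\in M[/itex], [itex]n,n_1,n_2\in N[/itex] and [itex]r\in R[/itex]; the image of [itex](m,n)[/itex] in the quotient is written [itex]m\otimes n[/itex].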

What are the properties of a tensor product of modules?

The tensor product of modules has several important properties: the canonical map [itex]M\times N\to M\otimes N[/itex] is bilinear, the construction is associative up to isomorphism, and it satisfies a universal property. Over a commutative ring it is also commutative up to isomorphism, meaning that the order of the modules being tensored does not matter. Additionally, the tensor product distributes over direct sums and is compatible with scalar multiplication.
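
In symbols, over a commutative ring R these properties read, up to natural isomorphism:

[tex]M\otimes_R N\cong N\otimes_R M,\qquad (M\otimes_R N)\otimes_R P\cong M\otimes_R (N\otimes_R P),\qquad M\otimes_R (N\oplus P)\cong (M\otimes_R N)\oplus (M\otimes_R P),\qquad R\otimes_R M\cong M.[/tex]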

What is the significance of the tensor product of modules?

The tensor product of modules is a powerful tool in abstract algebra, particularly in the study of rings and modules. It allows for the extension of concepts from vector spaces to more general structures, and has numerous applications in algebraic geometry and representation theory. It also has connections to other mathematical areas such as homological algebra and category theory.

How is the tensor product of modules used in real-world applications?

The tensor product of modules has applications in various fields, including physics, engineering, and computer science. In physics, it is used to describe multilinear interactions between physical quantities. In engineering, it is used in areas such as signal processing and control theory. In computer science, it is used in the study of algorithms and data structures, particularly in the area of machine learning and neural networks.
