Undergrad Submodule Generated by Family of Submodules - Simple Question

  • Thread starter: Math Amateur

Summary
The discussion revolves around a question regarding Theorem 2.3 from T. S. Blyth's "Module Theory: An Approach to Linear Algebra." The main concern is whether the theorem's statement implies that linear combinations can only take one element from each submodule in a family, or if it allows for multiple elements from the same submodule. Clarifications indicate that the theorem's framework ensures that only finite sums are considered, thus preventing infinite combinations from occurring. The importance of understanding Theorem 2.2 is emphasized, as it establishes the finite nature of the sums involved. The inquiry is resolved, affirming the clarity of the concepts discussed.
Math Amateur
I am reading T. S. Blyth's book "Module Theory: An Approach to Linear Algebra" and am currently focused on Chapter 1: Modules, Vector Spaces and Algebras.

I need help with a basic and possibly simple aspect of Theorem 2.3 ...

Since the answer to my question may depend on Blyth's previous definitions and theorems, I am providing some relevant text from Blyth prior to Theorem 2.3; those confident with the theory can of course go straight to the theorem at the bottom of the scanned text.

Theorem 2.3, together with some relevant prior definitions and theorems, reads as follows (Theorem 2.3 appears at the end of the text fragment):
[Scanned text: Blyth, relevant definitions and theorems leading up to Theorem 2.3 (see attachments below)]
In the above text (near the end) we read, in the statement of Theorem 2.3:

" ... then the submodule generated by ##\bigcup_{ i \in I } M_i## consists of all finite sums of the form ##\sum_{ j \in J } m_j## ... "

The above statement seems to assume that we take one element from each ##M_j## in forming the sum ##\sum_{ j \in J } m_j## ... but how do we know a linear combination does not take more than one element from a particular ##M_j##, say ##M_{ j_0 }##, or indeed all of its elements from one particular ##M_j##, rather than one element from each submodule in the family ##\{ M_i \}_{ i \in I }##?

Hope someone can clarify this ...

Peter
 

Attachments

  • Blyth - 1 - Theorem 2.3 plus relevant theory ... Page 1.png
  • Blyth - 2 - Theorem 2.3 plus relevant theory ... Page 2.png
  • Blyth - 3 - Theorem 2.3 plus relevant theory ... Page 3.png
Math Amateur said:
" ... then the submodule generated by ##\bigcup_{ i \in I } M_i## consists of all finite sums of the form ##\sum_{ j \in J } m_j## ... "

The above statement seems to assume that we take one element from each ##M_j## in forming the sum ##\sum_{ j \in J } m_j## ... but how do we know a linear combination does not take more than one element from a particular ##M_j##, say ##M_{ j_0 }##, or indeed all of its elements from one particular ##M_j##, rather than one element from each submodule in the family ##\{ M_i \}_{ i \in I }##?

With ##S=\bigcup_{ i \in I } M_i## we get ##<S> =<\bigcup_{ i \in I } M_i> = LC(\bigcup_{ i \in I } M_i)=LC(S)## by Theorem 2.2.
This means that an element ##x## of ##<S>=LC(S)## is a finite sum of elements ##m_i##: by definition, ##LC(S)## allows only finitely many summands ##\neq 0##, and a (again finite) linear combination of those elements is still finite. So we may choose a maximal, yet finite, subset ##J \subseteq I## such that every summand is some ##m_j \in M_j \; (j \in J)## and ##J \in \mathbb{P}^*(I)##. All the distinct (but still finitely many) elements ##m_j \, , \, m_j^{'}\, , \, m_j^{''}\, , \dots## taken from a single submodule ##M_j## can be added within ##M_j##, since ##M_j## is closed under addition, and the result called, say, ##\overline{m}_j \in M_j##. The set ##J## of all indices whose elements ##\overline{m}_j## take part in the sum defining ##x## then does the job.

The crucial point is that all the sums involved are finite: linear combinations of elements of ##\bigcup_{ i \in I } M_i##, and index sets ##J \in \mathbb{P}^*(I)##. Finite times finite is still finite. The essential part is therefore to understand Theorem 2.2, especially the fact that infinite sums cannot show up out of nowhere.
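To make the regrouping step concrete, here is a small illustrative computation (my own toy example, not taken from Blyth), with ##I = \{1, 2\}## and a sum that happens to use two elements of ##M_1##:

```latex
% Toy example: two submodules M_1, M_2 of a module M.
% Take m_1, m_1' \in M_1 and m_2 \in M_2. Then
\[
x \;=\; m_1 + m_1' + m_2
  \;=\; \underbrace{(m_1 + m_1')}_{\overline{m}_1 \,\in\, M_1} \, + \; m_2
  \;=\; \sum_{j \in J} \overline{m}_j ,
  \qquad J = \{1, 2\} \in \mathbb{P}^*(I).
\]
% Closure of M_1 under addition absorbs the repeated contribution
% into the single summand \overline{m}_1, so x has the form required
% by Theorem 2.3: one summand per index in a finite J.
```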

Remark: There are concepts that deal with infinity here, but they are not what is meant above. (Just in case you meet them later on.)
 
Thanks for the help, fresh_42 ... the matter is now clear thanks to you ...

Appreciate the help ...

Peter
 