Submodule Generated by a Family of Submodules - Simple Question

  • Context: Undergrad
  • Thread starter: Math Amateur
SUMMARY

This discussion centers on Theorem 2.3 from T. S. Blyth's "Module Theory: An Approach to Linear Algebra," specifically the submodule generated by the union of a family of submodules. The theorem asserts that this submodule consists of all finite sums of elements from the submodules in the family. A key clarification is that, in forming these sums, one can indeed take multiple elements from a single submodule, as long as the total number of summands remains finite. Understanding this hinges on Theorem 2.2, which guarantees that linear combinations involve only finitely many nonzero summands, so no infinite sums can arise.

PREREQUISITES
  • Familiarity with module theory concepts
  • Understanding of linear combinations in algebra
  • Knowledge of finite sets and their properties
  • Basic comprehension of T. S. Blyth's "Module Theory: An Approach to Linear Algebra"
NEXT STEPS
  • Study Theorem 2.2 in Blyth's book for a deeper understanding of linear combinations
  • Explore the implications of finite sums in module theory
  • Research the properties of submodules and their unions
  • Examine examples of linear combinations in various algebraic structures
USEFUL FOR

Mathematicians, students of algebra, and educators seeking clarity on module theory, particularly those interested in the nuances of submodules and linear combinations.

Math Amateur
I am reading T. S. Blyth's book "Module Theory: An Approach to Linear Algebra" ... ... and am currently focussed on Chapter 1: Modules, Vector Spaces and Algebras ... ...

I need help with a basic and possibly simple aspect of Theorem 2.3 ...

Since the answer to my question may depend on Blyth's previous definitions and theorems I am providing some relevant text from Blyth prior to Theorem 2.3 ... but those confident with the theory obviously can go straight to the theorem at the bottom of the scanned text ...

Theorem 2.3 together with some relevant prior definitions and theorems reads as follows: (Theorem 2.3 at end of text fragment)
[Scanned text from Blyth: Theorem 2.3 and the relevant prior definitions and theorems - see the three attached pages]
In the above text (near the end) we read, in the statement of Theorem 2.3:

" ... ... then the submodule generated by ##\bigcup_{ i \in I } M_i## consists of all finite sums of the form ##\sum_{ j \in J } m_j## ... ... "The above statement seems to assume we take one element from each ##M_j## in forming the sum ##\sum_{ j \in J } m_j## ... ... but how do we know a linear combination does not take more than one element from a particular ##M_j##, say ##M_{ j_0 }## ... ... or indeed all elements from one particular ##M_j## ... rather than one element from each submodule in the family ##\{ M_i \}_{ i \in I}## ...

Hope someone can clarify this ...

Peter
 

Attachments

  • Blyth - 1 - Theorem 2.3 plus relevant theory ... Page 1.png
  • Blyth - 2 - Theorem 2.3 plus relevant theory ... Page 2.png
  • Blyth - 3 - Theorem 2.3 plus relevant theory ... Page 3.png
Math Amateur said:
" ... ... then the submodule generated by ##\bigcup_{ i \in I } M_i## consists of all finite sums of the form ##∑_{j∈J} m_j## ... ... "The above statement seems to assume we take one element from each ##M_j## in forming the sum ##\sum_{ j \in J } m_j## ... ... but how do we know a linear combination does not take more than one element from a particular ##M_j##, say ##M_{ j_0 }## ... ... or indeed all elements from one particular ##M_j## ... rather than one element from each submodule in the family ## \{ M_i \}_{ i \in I}## ...

With ##S=\bigcup_{ i \in I } M_i## we get ##<S> = <\bigcup_{ i \in I } M_i> = LC(\bigcup_{ i \in I } M_i) = LC(S)## by Theorem 2.2.
This means that an element ##x## of ##<S>=LC(S)## is a finite sum of elements ##m_i##. By definition, ##LC(S)## allows only finitely many summands ##\neq 0##, and a (again finite) linear combination of those elements is still finite. So we may choose a maximal, yet finite, subset ##J \subseteq I## such that all summands are ##m_j \in M_j \; (j \in J)## and ##J \in \mathbb{P}^*(I)##. All the different (but still finitely many) elements ##m_j \, , \, m_j^{'}\, , \, m_j^{''}\, , \dots## taken from a single submodule ##M_j## can be added within ##M_j##, and the result called, e.g., ##\overline{m}_j \in M_j##. The set ##J## of all indices whose elements ##\overline{m}_j## take part in the sum defining ##x## will do the job.
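As a toy illustration of this collapsing step (my own example, not from Blyth's text), suppose a sum happens to take two elements from the same submodule:

```latex
% Two elements from M_1 collapse into a single summand:
\begin{align*}
x &= m_1 + m_1' + m_2,
   && m_1,\, m_1' \in M_1,\quad m_2 \in M_2 \\
  &= \underbrace{(m_1 + m_1')}_{=:\,\overline{m}_1 \,\in\, M_1} + \, m_2,
   && \text{since } M_1 \text{ is closed under addition.}
\end{align*}
% So x is a sum with exactly one summand per submodule, J = \{1, 2\}.
```

The same regrouping works for any finite number of repeats from any of the ##M_j##.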

The crucial point is that all the sums involved are finite: the linear combinations of elements in ##\bigcup_{ i \in I } M_i##, and the index sets ##J \in \mathbb{P}^*(I)##. Finite times finite is still finite. So the essential step is to understand Theorem 2.2, especially that infinite sums cannot show up out of nowhere.

Remark: There are concepts here that do deal with infinite sums, but they are not what is meant above. (Just in case you meet those later on.)
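A minimal computational sketch of the same point, over the ring ##\mathbb{Z}## with the illustrative submodules ##4\mathbb{Z}## and ##6\mathbb{Z}## (my own choice of example, not from Blyth's text). The submodule generated by their union should be ##\gcd(4,6)\mathbb{Z} = 2\mathbb{Z}##:

```python
from math import gcd
from itertools import product

def finite_sums(generators, coeff_range):
    """All sums a*g1 + b*g2 + ... for coefficients in a finite window.

    Over Z, taking several elements from the same submodule 4Z,
    e.g. 4 + 8, equals the single element 12 of 4Z, so a single
    coefficient per generator already captures every finite sum --
    the collapsing argument behind Theorem 2.3.
    """
    sums = set()
    for coeffs in product(coeff_range, repeat=len(generators)):
        sums.add(sum(c * g for c, g in zip(coeffs, generators)))
    return sums

window = range(-10, 11)
generated = finite_sums([4, 6], window)

# Every element produced is a multiple of gcd(4, 6) = 2 ...
assert all(x % gcd(4, 6) == 0 for x in generated)
# ... and small multiples of 2 are all reachable, e.g. 2 = 6 - 4.
assert {0, 2, -2, 4, 6, 10} <= generated
```

Of course the code only explores a finite window of coefficients, but that is exactly in the spirit of the theorem: every element of the generated submodule is reached by some finite sum.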
 
Thanks for the help, fresh_42 ... the matter is now clear thanks to you ...

Appreciate the help ...

Peter
 
