# Submodule Generated by Family of Submodules - Simple Question

Gold Member
I am reading T. S. Blyth's book "Module Theory: An Approach to Linear Algebra" ... ... and am currently focussed on Chapter 1: Modules, Vector Spaces and Algebras ... ...

I need help with a basic and possibly simple aspect of Theorem 2.3 ...

Since the answer to my question may depend on Blyth's previous definitions and theorems I am providing some relevant text from Blyth prior to Theorem 2.3 ... but those confident with the theory obviously can go straight to the theorem at the bottom of the scanned text ...

Theorem 2.3, together with some relevant prior definitions and theorems, reads as follows (Theorem 2.3 is at the end of the text fragment):

In the above text (near the end) we read, in the statement of Theorem 2.3:

" ... ... then the submodule generated by ##\bigcup_{ i \in I } M_i## consists of all finite sums of the form ##\sum_{ j \in J } m_j## ... ... "

The above statement seems to assume that we take one element from each ##M_j## in forming the sum ##\sum_{ j \in J } m_j## ... ... but how do we know a linear combination does not take more than one element from a particular ##M_j##, say ##M_{ j_0 }## ... ... or indeed several elements from one particular ##M_j## ... rather than exactly one element from each submodule in the family ##\{ M_i \}_{ i \in I }## ...

Hope someone can clarify this ...

Peter

#### Attachments


fresh_42
Mentor
" ... ... then the submodule generated by ##\bigcup_{ i \in I } M_i## consists of all finite sums of the form ##∑_{j∈J} m_j## ... ... "

The above statement seems to assume we take one element from each ##M_j## in forming the sum ##\sum_{ j \in J } m_j## ... ... but how do we know a linear combination does not take more than one element from a particular ##M_j##, say ##M_{ j_0 }## ... ... or indeed all elements from one particular ##M_j## ... rather than one element from each submodule in the family ## \{ M_i \}_{ i \in I}## ...
With ##S=\bigcup_{ i \in I } M_i## we get ##\langle S \rangle = \langle \bigcup_{ i \in I } M_i \rangle = LC(\bigcup_{ i \in I } M_i) = LC(S)## by Theorem 2.2.
This means that an element ##x## of ##\langle S \rangle = LC(S)## is a finite sum of elements ##m_i##: by definition ##LC(S)## allows only finitely many summands ##\neq 0##, and a (again finite) linear combination of those elements is still finite. So we may choose a finite subset ##J \subseteq I##, i.e. ##J \in \mathbb{P}^*(I)##, such that every summand lies in some ##M_j \; (j \in J)##. All the different (but still finitely many) elements ##m_j \, , \, m_j^{'} \, , \, m_j^{''} \, , \dots## taken from a single submodule ##M_j## can be added within ##M_j##, and the result called, say, ##\overline{m}_j \in M_j##. The set ##J## of all indices whose elements ##\overline{m}_j## appear in the sum that defines ##x## then does the job.
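The regrouping step can be written out explicitly (a sketch in the notation above, assuming several summands happen to come from one particular ##M_{j_0}##):

```latex
x = \underbrace{m_{j_0} + m_{j_0}' + m_{j_0}''}_{\in\, M_{j_0}}
    \;+ \sum_{j \in J \setminus \{j_0\}} m_j
  \;=\; \overline{m}_{j_0} + \sum_{j \in J \setminus \{j_0\}} m_j,
\qquad
\overline{m}_{j_0} := m_{j_0} + m_{j_0}' + m_{j_0}'' \in M_{j_0},
```

where the collapse is possible because ##M_{j_0}##, being a submodule, is closed under addition.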

The crucial point is that all the sums involved are finite: the linear combinations of elements of ##\bigcup_{ i \in I } M_i##, and the index sets ##J \in \mathbb{P}^*(I)##. Finite times finite is still finite. So the essential part is to understand Theorem 2.2, especially that infinite sums cannot show up out of nowhere.

Remark: There are concepts that do deal with infinity here, but they are not what is meant above. (Just in case you meet them later on.)
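Here is a minimal numeric sketch of the collapsing argument (my own toy example, not from Blyth's text), using ##\mathbb{Z}## as a module over itself with submodules ##M_1 = 2\mathbb{Z}## and ##M_2 = 3\mathbb{Z}##:

```python
# Toy illustration: in the Z-module Z, take the submodules M1 = 2Z and
# M2 = 3Z.  An element of the submodule generated by M1 ∪ M2 may a priori
# use several summands from the same submodule, but since each submodule
# is closed under addition, those summands collapse to a single element.

def in_nZ(x, n):
    """Membership test for the submodule nZ of the Z-module Z."""
    return x % n == 0

# two summands from M1 = 2Z, plus one summand from M2 = 3Z
total = 2 + 4 + 3

# the M1-summands collapse to a single element of M1
m1_bar = 2 + 4
assert in_nZ(m1_bar, 2)        # 2 + 4 = 6 is still in 2Z
assert total == m1_bar + 3     # same element, now one summand per submodule
```

The point of the sketch is only that the collapse never leaves the submodule, so every element of the generated submodule can be rewritten with at most one summand per ##M_j##.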

Math Amateur
Gold Member

Thanks for the help, fresh_42 ... the matter is now clear thanks to you ...

Appreciate the help ...

Peter