K^n as a K[T]-module - Example 2.1.2

  • Context: MHB
  • Thread starter: Math Amateur
  • Tags: Example
SUMMARY

The discussion focuses on Example 2.1.2 (ii) from "An Introduction to Rings and Modules With K-Theory in View" by A.J. Berrick and M.E. Keating, specifically regarding the module structure of \( V = K^n \) over the polynomial ring \( K[T] \). The key point is that \( V \) can be expressed as a direct sum of \( K[T] \)-submodules \( U \) and \( W \) when the matrix \( A \) is in the block form \( \begin{pmatrix} B & 0 \\ 0 & D \end{pmatrix} \), where \( B \) is an \( r \times r \) matrix and \( D \) is an \( (n - r) \times (n - r) \) matrix. The verification of this decomposition requires ensuring that \( U \) and \( W \) are closed under the \( K[T] \)-action, which involves analyzing the action of the matrix \( A \) on elements of \( U \) and \( W \).

PREREQUISITES
  • Understanding of \( K[T] \)-modules and their properties
  • Familiarity with matrix operations and block matrices
  • Knowledge of vector spaces and direct sums
  • Basic concepts of polynomial rings and their actions on vector spaces
NEXT STEPS
  • Study the properties of \( K[T] \)-modules in detail
  • Learn about block matrices and their applications in linear algebra
  • Investigate the closure properties of submodules under polynomial actions
  • Explore further examples of direct sums in module theory
USEFUL FOR

Mathematicians, graduate students in algebra, and anyone studying module theory and linear algebra, particularly those interested in the applications of polynomial rings in module decomposition.

Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I need help with understanding Example 2.1.2 (ii) (page 39) which concerns $$V = K^n$$ viewed as a module over the polynomial ring $$K[T]$$.

Example 2.1.2 (ii) (page 39) reads as follows (View attachment 2965). In the above text by B&K we read:

" ... ... it is easy to verify that the decomposition $$V = U \oplus W$$ expresses $$V$$ as a direct sum of $$K[T]$$-submodules precisely when $$A = \left(\begin{array}{cc}B&0\\0&D\end{array}\right)$$

with $$B$$ an $$r \times r$$ matrix

and

$$D$$ an $$(n - r) \times (n - r)$$ matrix, $$B$$ and $$D$$ giving the action of $$T$$ on $$U$$ and $$W$$ respectively. ... ..."

I am trying to formally and rigorously verify this statement, but am unsure how to approach this task. Can someone please help me to get started on this verification ... ?
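To make the target of the verification explicit (my rephrasing of B&K's claim, not a quotation), with $$U$$ the subspace of $$V = K^n$$ spanned by the first $$r$$ standard basis vectors and $$W$$ the subspace spanned by the last $$n - r$$:

$$V = U \oplus W \ \text{is a direct sum of } K[T]\text{-submodules} \ \Longleftrightarrow \ A = \begin{pmatrix} B & 0 \\ 0 & D \end{pmatrix}, \ \text{with } B \ \text{an } r \times r \ \text{matrix and } D \ \text{an } (n-r) \times (n-r) \ \text{matrix}.$$

The forward direction amounts to showing that closure of $$U$$ and $$W$$ under the action of $$T$$ forces the off-diagonal blocks of $$A$$ to vanish; the converse amounts to showing that the block form keeps $$U$$ and $$W$$ closed under the action of every polynomial.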

------------------------------------------------

Other relevant text from B&K that may be needed to interpret and understand the above example follows.

B&K's notation for polynomial rings is as follows:

View attachment 2966
B&K's definition of a module is as follows:
View attachment 2967
View attachment 2968
B&K's explanation and notation for $$K^n$$ as a right module over $$K[T|$$ is as follows:View attachment 2969
 
Well, first let's look at what we need to happen for $\mathcal{K}^n$ to be the direct sum of $U$ and $W$ as $\mathcal{K}[T]$-modules.

First of all, we need $U$ and $W$ to be $\mathcal{K}[T]$-submodules.

The closure under addition is clear: as vector subspaces, both $U$ and $W$ are abelian groups, and thereby closed under addition.

So what we need to do is verify that they are likewise closed under the $\mathcal{K}[T]$-action, that:

$u \cdot f(T) \in U$ for all $u \in U$ and all $f(T) \in \mathcal{K}[T]$ (a similar consideration holds for $W$).

So we need $Au \in U$ for every $u \in U$. By induction, this ensures that $A^tu \in U$ for all $t \geq 0$, and therefore that:

$A^tuf_t \in U$ for each coefficient $f_t$ of $f(T)$, and so (adding all the terms) $u \cdot f(T) \in U$.
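To spell out the induction (a small expansion of the step above, not part of the original post): if $Au \in U$ whenever $u \in U$, then $A^tu = A(A^{t-1}u) \in U$ for every $t \geq 1$, while $A^0u = u \in U$ trivially. Hence, for $f(T) = f_0 + f_1T + \cdots + f_mT^m$,

$$u \cdot f(T) = uf_0 + (Au)f_1 + \cdots + (A^mu)f_m \in U,$$

because $U$ is a subspace of $\mathcal{K}^n$ and is therefore closed under scalar multiples and finite sums.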

If we write $A$ in block form, this ($Au \in U$) becomes:

$\begin{bmatrix}B&H\\K&D \end{bmatrix} \begin{bmatrix}u\\0 \end{bmatrix} = \begin{bmatrix}u'\\0 \end{bmatrix}$

To achieve this, we must have $Ku + D0 = Ku = 0$, for ALL $u \in U$. So $K$ is the 0-block.

A similar analysis with $W$ shows $H$ must be the 0-block.
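Filling in the details of the "similar analysis" and the converse direction (my expansion, in the block notation above): write $u = \begin{bmatrix}u_0\\0\end{bmatrix}$ with $u_0 \in \mathcal{K}^r$ and $w = \begin{bmatrix}0\\w_0\end{bmatrix}$ with $w_0 \in \mathcal{K}^{n-r}$. Then

$$Au = \begin{bmatrix}B&H\\K&D\end{bmatrix}\begin{bmatrix}u_0\\0\end{bmatrix} = \begin{bmatrix}Bu_0\\Ku_0\end{bmatrix}, \qquad Aw = \begin{bmatrix}B&H\\K&D\end{bmatrix}\begin{bmatrix}0\\w_0\end{bmatrix} = \begin{bmatrix}Hw_0\\Dw_0\end{bmatrix}.$$

So $Au \in U$ for all $u_0$ forces $K = 0$, and $Aw \in W$ for all $w_0$ forces $H = 0$. Conversely, if $H = 0$ and $K = 0$, then $Au = \begin{bmatrix}Bu_0\\0\end{bmatrix} \in U$ and $Aw = \begin{bmatrix}0\\Dw_0\end{bmatrix} \in W$, so both subspaces are closed under the action of $T$ (hence, by the induction above, under all of $\mathcal{K}[T]$), with $B$ and $D$ giving the action of $T$ on $U$ and $W$ respectively.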

Note that $U + W = V$ considered purely as abelian groups. Furthermore, note that:

$u \cdot 1_{\mathcal{K}[T]} = (Iu)\cdot 1 = u$, and similarly for $W$, so as $\mathcal{K}[T]$-modules these are non-zero (this is true even if the matrix $A$ is the 0-matrix, since the action of a constant polynomial does not involve any $A^tu$ terms).

Finally, since $U \cap W = \{0_V\}$ as vector spaces (we started with a direct sum of subspaces), the same is true when we consider them as $\mathcal{K}[T]$-modules. So (DS1) and (DS2) are satisfied, and we have a direct sum of modules.

(in my opinion this flows better with a left-action, but it's "essentially" the same).
 
Deveno said:

So what we need to do is verify that they are likewise closed under the $\mathcal{K}[T]$-action ...
Thanks Deveno ... but I need your help in order to clarify some of the mechanics of the $\mathcal{K}[T]$-actions for $$U$$ and $$W$$ ...

I can see that $$U$$ and $$W$$ are both abelian groups under addition and are therefore closed under addition, but as I have indicated above I am having trouble understanding the mechanics of the $\mathcal{K}[T]$-actions for $$U$$ and $$W$$ ... hope you can help ...
I will explain my difficulties by focusing on $$ U = \mathcal{K}^r$$ ... the same considerations apply to $$ W = \mathcal{K}^{n-r} $$ ... ...

Now, consider the action $$u \bullet f(T)$$ ... ...

$$ u \bullet f(T) = u \bullet (f_0 + f_1T + f_2T^2 + \cdots + f_mT^m ) $$

Therefore, by the definition of the action we have:

$$ u \bullet f(T) = uf_0 + Auf_1 + A^2uf_2 + \cdots + A^muf_m $$

Now consider the term $$uf_0$$ in the above expression ...

Let $$u = \begin{pmatrix} u_1 \\ . \\ . \\ . \\ u_r \end{pmatrix}$$, $$f_0 = \begin{pmatrix} f_{10} \\ f_{20} \\ . \\ . \\ . \\ f_{n0} \end{pmatrix}$$

... so how do we calculate/form $$uf_0$$?

Similarly $$A$$ is $$(n \times n)$$ , $$u$$ is $$(r \times 1)$$, and $$f$$ is $$(n \times 1)$$ ...

so then how do we calculate/form $$ Auf_1 $$ ... and so on?

Hope you can help ...

Peter
 
Peter said:

... so how do we calculate/form $$uf_0$$? ... so then how do we calculate/form $$ Auf_1 $$ ... and so on?
$f \in \mathcal{K}[T]$, so when we write:

$f(T) = f_0 + f_1T + \cdots + f_mT^m$, each of the coefficients $f_j \in \mathcal{K}$; these are just field elements, not column vectors.

Now in our given basis for $\mathcal{K}^n$, a typical $u \in U$ looks like:

$u = \begin{pmatrix}u_1\\u_2\\ \vdots\\u_r\\0\\0\\ \vdots\\0 \end{pmatrix}$

This is an $n \times 1$ matrix, and $A$ is an $n \times n$ matrix, so $Au$ is an $n \times 1$ matrix.

$Auf_1$ is just the $n \times 1$ matrix where every entry of $Au$ is multiplied by the coefficient $f_1$ of $T$ in the polynomial $f(T)$ (we're only writing it on the right so we get a right-action).
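To make these mechanics concrete, here is a small numerical sketch (my own illustration, not from B&K or from the thread; the function name `act` and the specific matrices are hypothetical) that evaluates $v \cdot f(T) = vf_0 + (Av)f_1 + \cdots + (A^mv)f_m$ with NumPy and checks that a block-diagonal $A$ keeps elements of $U$ (last $n - r$ coordinates zero) inside $U$, while a matrix with a nonzero lower-left block does not:

```python
import numpy as np

def act(v, f_coeffs, A):
    """Right action of f(T) = f_0 + f_1*T + ... + f_m*T^m on v in K^n:
    v . f(T) = v*f_0 + (A v)*f_1 + ... + (A^m v)*f_m."""
    result = np.zeros_like(v, dtype=float)
    power = v.astype(float)            # A^0 v = v
    for f_t in f_coeffs:               # f_coeffs = [f_0, f_1, ..., f_m]
        result += power * f_t          # add (A^t v) * f_t
        power = A @ power              # next power: A^(t+1) v
    return result

n, r = 4, 2
B = np.array([[1.0, 2.0], [0.0, 3.0]])
D = np.array([[5.0, 0.0], [1.0, 1.0]])

# Block-diagonal A: U (last n - r coordinates zero) should be preserved.
A_block = np.block([[B, np.zeros((r, n - r))],
                    [np.zeros((n - r, r)), D]])

# A with a nonzero lower-left block: U should NOT be preserved.
A_bad = A_block.copy()
A_bad[r:, :r] = 1.0

u = np.array([1.0, -2.0, 0.0, 0.0])    # an element of U
f_coeffs = [2.0, 0.0, 1.0]             # f(T) = 2 + T^2

print(act(u, f_coeffs, A_block))       # last two entries stay 0
print(act(u, f_coeffs, A_bad))         # last two entries generally nonzero
```

Running it, the first output has its last two coordinates equal to zero while the second does not, matching the block analysis earlier in the thread.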
 
