K[T]-Modules and Block Forms of Matrices

Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I need help with Exercise 1.2.9 (a) ...

Exercise 1.2.9 (a) reads as follows:

https://www.physicsforums.com/attachments/5101

I am somewhat overwhelmed by this exercise ... can someone help me to get a start on it?

Hope someone can help ...

Some preliminary thoughts ... ...

$$A = \begin{pmatrix} k_{11} & \cdots & k_{1n} \\ \vdots & \ddots & \vdots \\ k_{n1} & \cdots & k_{nn} \end{pmatrix}$$ where $$k_{ij} \in \mathcal{K}$$
Now, $$M$$ is a $$\mathcal{K} [T]$$-module, so ... ...

... if $$m \in M$$ then the (right) action of the ring $$\mathcal{K} [T]$$ on the module $$M$$ is ...

$$m \cdot f(T) = m f_0 + A m f_1 + A^2 m f_2 + \cdots + A^r m f_r$$ ... ... ... (1)

where

$$f(T) = f_0 + f_1 T + f_2 T^2 + \cdots + f_r T^r$$ ... ... ... (2)
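
For instance (just to check my understanding of (1)), taking $$f(T) = f_0 + f_1 T$$ gives

$$m \cdot f(T) = m f_0 + A m f_1$$

... that is, the constant term acts by scalar multiplication, and each power $$T^j$$ acts as multiplication by the matrix power $$A^j$$ ...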

(Note: despite some help on this issue I still do not completely understand how the powers of $$A$$ end up on the left in (1) above ... can someone please help?)

Now ... proceeding ... $$L$$ is a submodule of $$M$$ ... BUT ... what does it mean that $$L$$ has a subspace $$U$$ ... what is being said here ...?
Can someone please help me to proceed ... ?

Help will be much appreciated ... ...

Peter
 
The $A$'s end up on the left because the product of an $n \times n$ matrix with an $n \times 1$ column only yields an $n \times 1$ column when the $n \times n$ matrix is multiplied on the left.
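
Concretely, unwinding the definition (1) for the monomials $f(T) = T$ and $f(T) = T^2$:

$$m \cdot T = Am, \qquad m \cdot T^2 = A^2 m = A(Am) = (m \cdot T)\cdot T,$$

and in each product the $n \times n$ matrix must sit to the left of the $n \times 1$ column $m$ for the multiplication to make sense.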

If $L$ is a submodule of $M$, it follows that $(L,+)$ is an abelian subgroup of $(M,+) = (\mathcal{K}^n,+)$.

So the SET $L$ (which is non-empty since $0_M \in L$) only lacks closure under scalar multiplication to be a subspace of $\mathcal{K}^n$.

But this is clear, since we can regard scalar multiplication as the $\mathcal{K}[T]$-action of a constant polynomial. I assume B&K call it $U$ just to remind you it is a DIFFERENT structure than the $\mathcal{K}[T]$-submodule $L$.
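
Spelled out, using (1) with the constant polynomial $f(T) = k$ for a scalar $k \in \mathcal{K}$: for $\ell \in L$,

$$\ell \cdot k = \ell k \in L,$$

since $L$ is closed under the $\mathcal{K}[T]$-action. So the underlying set of $L$, with this scalar multiplication, is a $\mathcal{K}$-subspace of $\mathcal{K}^n$; presumably that subspace is the $U$ of the exercise.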

One of the consequences of $L$ being a submodule of $M$, is that $U$ is an $A$-invariant subspace of $\mathcal{K}^n$, that is:

$A(U) \subseteq U$.
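
To see this, take any $u \in U$, i.e. any element of $L$ regarded as a column vector; then by (1),

$$Au = u \cdot T \in L,$$

because $L$ is closed under the action of the polynomial $T$. Hence $A(U) \subseteq U$.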

So if we pick a basis $\{u_1,\dots,u_k\}$ of $U$ and extend it to a basis $\{u_1,\dots,u_k,v_1,\dots,v_{n-k}\}$ of $\mathcal{K}^n$ (so the $u_i \in U$, while the $v_j \notin U$), then writing any $v \in \mathcal{K}^n$ as

$v = u_1\alpha_1 +\cdots + u_k\alpha_k + v_1\alpha_{k+1} +\cdots + v_{n-k}\alpha_n$,

it follows that:

$Av = u + w$, where $u \in U$ (it's hard to say anything meaningful about $w$, that's why we have the block $C$).

So what you want is a matrix $\Gamma$ that takes this basis to the standard basis.
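
Just to sketch where this is heading (my reading of the exercise, since the statement itself is in the attachment): let $P$ be the invertible $n \times n$ matrix whose columns are $u_1,\dots,u_k,v_1,\dots,v_{n-k}$, so that $\Gamma = P^{-1}$ takes this basis to the standard basis. The $A$-invariance of $U$ then gives a block upper triangular form

$$\Gamma A \Gamma^{-1} = P^{-1} A P = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix},$$

where $B$ is $k \times k$ (the action of $A$ on $U$) and $D$ is $(n-k) \times (n-k)$ (the induced action on $\mathcal{K}^n/U$).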

A lot of what is overwhelming you in these exercises is a limited exposure to linear algebra. Modules are a generalization of vector spaces, and also of abelian groups. While I believe the abelian group aspect is something you're comfortable with, trying to "dive into modules" without a firm grasp of linear algebra is ill-advised. I recommend you invest some time into reading a first-rate linear algebra text before tackling this material.
 
Thanks Deveno ...

... will be going through the details of your post shortly ...

Thanks for the advice re linear algebra ... I have a couple of linear algebra texts with me in Victoria (I travel heavy ... due to math texts ... :) ) ...

I suppose the main topics I need to cover are Vector Spaces and Linear Transformations ... and maybe bilinear maps and forms ... what do you think? ... are there any other topics that are really important?

Peter
 
Hi Deveno,

I will very shortly switch from rings and modules and begin to review/learn the basics of linear algebra ...

... just a quick clarifying question about your post above ... You write:

"... ... $v = u_1\alpha_1 +\cdots + u_k\alpha_k + v_1\alpha_{k+1} +\cdots + v_{n-k}\alpha_n$ ... ... So ... presumably, in this equation, $$v$$ is an arbitrary column vector in $$\mathcal{K}^n$$ ... ... that is, $$v$$ is any vector in $$\mathcal{K}^n$$ ... and the dimension of $$v$$ is $$(n \times 1)$$ ...

The dimension of $$u$$ is also $$(n \times 1)$$ ... as is the dimension of the $$v_j$$ ...

... BUT ...

... what exactly are the $$\alpha_i$$ ... and what is their dimension ... and how do we know that the $$\alpha_i$$ exist such that $v = u_1\alpha_1 +\cdots + u_k\alpha_k + v_1\alpha_{k+1} +\cdots + v_{n-k}\alpha_n$
A simpler and more basic question is how did you arrive at the equation:

$v = u_1\alpha_1 +\cdots + u_k\alpha_k + v_1\alpha_{k+1} +\cdots + v_{n-k}\alpha_n$
I know that these are very basic questions ... but i hope you can help ...

Peter
 
No, $U$ is a subspace of $\mathcal{K}^n$, and has dimension $0 \leq k \leq n$. It follows by the version of the Fundamental Homomorphism Theorem for Vector Spaces (aka the rank-nullity theorem) that $\mathcal{K}^n/U$ has dimension $n-k$.

I am merely following the hint given in the problem: picking a basis for $U$, and extending it to a basis for all of $\mathcal{K}^n$. The $\alpha_i$ are then just scalars in $\mathcal{K}$: the coordinates of $v$ with respect to that basis, which exist (and are unique) for every $v$ precisely because the extended set is a basis. The extension step is a pretty basic theorem of linear algebra, called, for some strange reason, the "Basis Extension Theorem" (gotta love the guy who dreams these things up).
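
As a toy instance (not from B&K, just to make the scalars visible): take $n = 3$, $\mathcal{K} = \mathbb{R}$, and $U = \operatorname{span}\{u_1\}$ with $u_1 = (1,1,0)^T$. Extending by $v_1 = (0,1,0)^T$ and $v_2 = (0,0,1)^T$ gives a basis of $\mathbb{R}^3$, and an arbitrary vector decomposes as

$$\begin{pmatrix} a \\ b \\ c \end{pmatrix} = u_1 a + v_1 (b-a) + v_2 c,$$

so here $\alpha_1 = a$, $\alpha_2 = b-a$, $\alpha_3 = c$, all ordinary scalars.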
 