MHB Polynomials Acting on Spaces - B&K Ex. 1.2.2 (iv): An Intro by Peter

Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I need help in order to fully understand Example 1.2.2 (iv) [page 16] ... indeed, I am somewhat overwhelmed by this construction ... ...

Example 1.2.2 (iv) reads as follows:

View attachment 5089

My question is as follows:

Why do Berrick and Keating bother to use the indeterminate $$T$$ in the above ... why not just use $$f(A)$$ ... ? What is the point of $$T$$ in the above example ...?

By the way ... I am assuming that $$f_0, f_1, \ldots , f_r$$ are just elements of $$\mathcal{K}$$ ... ... is that correct?

Hope someone can help ...

Peter

*** EDIT ***

It may make sense if we think of the polynomial $$f \in \mathcal{K} [T]$$ being evaluated at $$A$$ ... BUT ... when we evaluate a polynomial in $$\mathcal{K} [T]$$, don't we take values of $$T$$ in $$\mathcal{K}$$ ... ... but ... problem ... $$A$$ is an $$n \times n$$ matrix and hence (of course) $$A \notin \mathcal{K}$$ ... ?

... anyway, hope someone can explain exactly how the construction in this example "works" ...

Peter
 
Given a polynomial in $\mathcal{K}[T]$, say:

$f(T) = f_0 + f_1T + \cdots + f_rT^r$

$T$ is an *indeterminate*, and it is possible to have such $f$ of arbitrarily high degree.

However, in the expression:

$f(A) = If_0 + Af_1 + \cdots + A^rf_r$ (here, we write the $f_j$ on the right, since we are viewing $\mathcal{K}^n$ as a *right* $\mathcal{K}$-module)

it turns out (for a field $\mathcal{K}$) that the matrix $A$ is actually *algebraic* over $\mathcal{K}$, so that:

$\mathcal{K}[A]$ is a *quotient* of $\mathcal{K}[T]$.

As you may recall, when one has a ring-homomorphism:

$\phi:R \to S$, and an $S$-module $M$, one can turn $M$ into an $R$-module like so:

$m\cdot r = m \cdot \phi(r)$ (the RHS is the right $S$-action).

The homomorphism here is:

$\phi: \mathcal{K}[T] \to \mathcal{K}[A]$,

and since $A \in \text{Hom}_{\mathcal{K}}(\mathcal{K}^n,\mathcal{K}^n)$, we have a natural action of $\mathcal{K}[A]$ on $\mathcal{K}^n$ defined by:

$x\cdot f(A) = (f(A))(x)$.

We then set $x\cdot f(T) = x\cdot \phi(f(T))$ (note $\phi(f(T))$ may have much lower degree than $f$, because if we have $m(A) = 0$, and:

$f(T) =q(T)m(T) + r(T)$ with $\text{deg }r < \text{deg }m$, it follows that:

$f(A) = r(A)$).
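To make this concrete, here is a minimal numerical sketch (my own example, not from B&K): a nilpotent $2 \times 2$ matrix $A$ with $A^2 = 0$, so $m(T) = T^2$ satisfies $m(A) = 0$, and a cubic $f$ whose remainder mod $m$ evaluates to the same matrix at $A$:

```python
import numpy as np

# Hypothetical example: A is nilpotent with A^2 = 0, so m(T) = T^2
# is a polynomial with m(A) = 0 (in fact the minimal polynomial of A).
A = np.array([[0, 1],
              [0, 0]])
I = np.eye(2, dtype=int)

# Take f(T) = T^3 + 2T + 3.  Division by m(T) = T^2 gives
# f(T) = T * m(T) + (2T + 3), so the remainder is r(T) = 2T + 3.
f_of_A = np.linalg.matrix_power(A, 3) + 2 * A + 3 * I
r_of_A = 2 * A + 3 * I

# Since m(A) = 0, the quotient term q(A)m(A) vanishes and f(A) = r(A).
assert np.array_equal(f_of_A, r_of_A)
```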

What happens, in actual practice, is that one determines the minimal polynomial of $A$ (if $A$ has $n$ distinct eigenvalues this will be the same as the characteristic polynomial $\det(IT - A)$). Knowing the degree of this allows us to choose a *basis* (over $\mathcal{K}$) for $\mathcal{K}[A]$, which means we only have to compute a finite number of powers of $A$ to know the action of *any* polynomial $f(T)$ upon a vector $x$.
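As an illustration of that last point (again my own sketch, assuming $\mathcal{K} = \Bbb Q$ and exact arithmetic): for a diagonal matrix with distinct eigenvalues, reducing a high-degree $f$ modulo the minimal polynomial gives the same action on a vector $x$:

```python
import numpy as np

# A has distinct eigenvalues 1 and 2, so its minimal polynomial equals
# the characteristic polynomial m(T) = (T - 1)(T - 2) = T^2 - 3T + 2.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
x = np.array([1.0, 1.0])

f = np.array([1.0, 0, 0, 0, 1, 1])   # f(T) = T^5 + T + 1, highest degree first
m = np.array([1.0, -3, 2])           # m(T) = T^2 - 3T + 2

q, r = np.polydiv(f, m)              # f = q*m + r with deg r < deg m

def act(coeffs, A, x):
    """Compute x . g(T) = g(A) x by Horner's scheme
    (only finitely many multiplications by A are needed)."""
    out = np.zeros_like(x)
    for c in coeffs:
        out = A @ out + c * x
    return out

# The degree-5 polynomial and its degree-1 remainder act identically:
assert np.allclose(act(f, A, x), act(r, A, x))
```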

Note that we get various $\mathcal{K}[T]$-modules this way, depending on *which* matrix $A$ we use. So this tells us more about $A$ than it does about the space $\mathcal{K}^n$ or the polynomial ring $\mathcal{K}[T]$ (although, depending on which field $\mathcal{K}$ we use, we will *also* get different modules, because the minimal polynomial of a matrix can depend on the field being used).

It's not really fair to say "we substitute $A$ for $T$". Polynomial expressions in matrices are a bit different from polynomial expressions in field elements (unless the matrices are $1 \times 1$).
 
Deveno said:
Given a polynomial in $\mathcal{K}[T]$, say $f(T) = f_0 + f_1T + \cdots + f_rT^r$ ...

Well ... that has given me a lot to think about ... there is obviously more to the above construction than I was aware of ... glad I asked the question! ... thanks so much Deveno ... really appreciate your help ...

I am now working through your post in detail ... reflecting on all you have written ...

[Sorry to be slow in replying ... had to leave state of Tasmania and travel to regional Victoria about 100 kms outside of Melbourne ... but have arranged Internet connection ... so should be on MHB when I can manage it ...]

Peter
 
Deveno said:
Given a polynomial in $\mathcal{K}[T]$, say $f(T) = f_0 + f_1T + \cdots + f_rT^r$ ...
Hi Deveno,

Just a quick question ...

You write:

"... ... ... ... However, in the expression:

$f(A) = If_0 + Af_1 + \cdots + A^rf_r$ (here, we write the $f_j$ on the right, since we are viewing $\mathcal{K}^n$ as a *right* $\mathcal{K}$-module) ... ... ... ...
... ... why exactly are we viewing $\mathcal{K}^n$ as a *right* $\mathcal{K}$-module ... aren't Berrick and Keating viewing $$\mathcal{K}^n$$ as a $$\mathcal{K} [T]$$-module ... ?

Can you clarify ... ?

Peter
 
I don't know "why", but Berrick and Keating write:

"...It is convenient to view $\mathcal{K}^n$ as a right $\mathcal{K}$-space...", that is, a right $\mathcal{K}$-module.

The only difference here is which side we write the scalar multiplication on, which for all practical purposes makes no difference, since the "scalar matrices":

$\alpha I$ for $\alpha \in \mathcal{K}$

commute with all other $n \times n$ matrices (in fact, they form the *center* of the ring of such matrices).
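A quick numerical check of that commuting claim (a throwaway sketch of mine, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3))   # an arbitrary 3x3 matrix
alpha = 7
S = alpha * np.eye(3, dtype=int)       # the "scalar matrix" alpha * I

# Scalar matrices commute with every n x n matrix, so which side the
# scalar action is written on makes no practical difference:
assert np.array_equal(S @ A, A @ S)
assert np.array_equal(S @ A, alpha * A)
```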
 
Deveno said:
I don't know "why", but Berrick and Keating write: "...It is convenient to view $\mathcal{K}^n$ as a right $\mathcal{K}$-space..." ...
Thanks Deveno ... ... yes, I see that ... so we can regard $\mathcal{K}^n$ as a right $\mathcal{K}$-module or we can regard $\mathcal{K}^n$ as a right $$\mathcal{K} [T]$$-module ... ... is that right?

Peter
 
Deveno said:
Given a polynomial in $\mathcal{K}[T]$, say $f(T) = f_0 + f_1T + \cdots + f_rT^r$ ...
Hi Deveno,

I need some further help ...

You write:

"... ... As you may recall, when one has a ring-homomorphism:

$\phi:R \to S$, and an $S$-module $M$, one can turn $M$ into an $R$-module like so:

$m\cdot r = m \cdot \phi(r)$ (the RHS is the right $S$-action). ... ..."

I am having trouble understanding what is happening here ... hope you can clarify for me ...

If $m\cdot r = m \cdot \phi(r)$ then presumably $$r = \phi (r)$$ ... (I think ... ... )

BUT ... if that is true, then $$R$$ must be embedded in $$S$$ ... but this may not be the case for some S ...

Can you explain what is going on ...

Peter*** EDIT ***

Maybe you are saying ... ... Define the action of $$R$$ on $$M$$ by $$m \cdot r = m \cdot \phi(r)$$ ...

Is that right ... ?

Then we would have to prove that the action satisfies the relevant 'axioms' for an action ... but, presumably this is straightforward ...
 
Yes!

Here is a way to "keep it straight":

Clearly, $\Bbb Z$ is an abelian group which we can define a (right) $\Bbb Z$-action on via right-multiplication (turning it into a ring).

But we cannot, in general, define a right $\Bbb Z_n$-action thereby, for example with $n= 4$:

$k \cdot ([2]_4 + [2]_4) = k \cdot [0]_4 = 0 \neq k\cdot [2]_4+ k\cdot [2]_4 = 2(k\cdot [2]_4)$.

HOWEVER, given a $\Bbb Z_n$-module, we can certainly turn it into a $\Bbb Z$-module by:

$m \cdot k = m \cdot [k]_n$.

In fact, this is precisely the way finite abelian groups are turned into $\Bbb Z$-modules.

So while the homomorphism goes FROM $R$ TO $S$, the induced module action goes from $\mathbf{Mod}-S$ (the category of right $S$-modules) to $\mathbf{Mod}-R$.
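The $\Bbb Z \to \Bbb Z_4$ example above can be sketched in a few lines (the names are mine; here $M = \Bbb Z_4$ acting on itself):

```python
N = 4  # working with phi: Z -> Z_4, k |-> [k]_4

def z4_action(m, k4):
    """The right Z_4-action on M = Z_4 (multiplication of residues)."""
    return (m * k4) % N

def z_action(m, k):
    """The induced Z-action by restriction of scalars:
    m . k = m . phi(k) = m . [k]_4."""
    return z4_action(m, k % N)

# The induced action is well defined: any two integer representatives
# of the same residue class act identically on every m in M.
assert all(z_action(m, k) == z_action(m, k + N)
           for m in range(N) for k in range(20))
```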
 