Proving isomorphisms [Linear Algebra]

  • Thread starter iJake
  • #1

Homework Statement


a) Let ##D_n(\mathbb{k}) = \{A \in M_n(\mathbb{k}) : a_{ij} = 0 \text{ for } i \neq j\}##
Prove that ## D_n(\mathbb{k}) \cong \mathbb{k}^n ##

b) Prove that ##\mathbb{k}[X]_n \cong \mathbb{k}^{n+1}##

I have one other exercise, but I would like to resolve it on my own. However, an unfamiliar bit of notation appears that I would like clarification on: what is ##\mathbb{k}^{\{x\}}##?


The attempt at a solution

I am unfamiliar with the notation used to prove isomorphism between two mathematical objects. I can see, for example, that the set of diagonal matrices is a vector space, as is ##\mathbb{k}^n##, and that therefore there should exist a linear transformation that maps the elements of one to the other.

Would this look something like:

##\phi : D_n(\mathbb{k}) \rightarrow \mathbb{k}^n...## but what would the map of this linear transformation look like? Am I mapping ##\{A \in M_n(\mathbb{k}) : a_{ij} = 0 \text{ for } i \neq j\}## to ##\{x, y, \ldots , n\} \in \mathbb{k}^n## or something similar?

Thanks for any help.
 

Answers and Replies

  • #2
To prove ##V \cong W## are isomorphic vector spaces, you need to
  1. establish a mapping ##\varphi \, : \, V \longrightarrow W##
  2. show that ##\varphi## is linear
  3. show that ##\varphi## is injective, i.e. ##\varphi(u)=\varphi(v) \Longrightarrow u=v##
  4. show that ##\varphi## is surjective, i.e. for all ##w\in W## there is a ##v\in V## such that ##\varphi(v)=w##
As for the first step, what would be a natural map from diagonal matrices with ##n## numbers to an ##n##-tuple?
 
  • #3
Would this look something like: ##\phi : D_n(\mathbb{k}) \rightarrow \mathbb{k}^n...## but what would the map of this linear transformation look like?
How many nonzero entries are there in the diagonal matrix? How many elements are there in the vector?
 
  • #4
Thanks to both of you! So I've put together this for the first exercise then.

1. ##\varphi : D_n(\mathbb{k}) \rightarrow \mathbb{k}^n, \varphi (a_{11}, a_{22}, \ldots , a_{nn}) = (v_1, v_2, \ldots , v_n)##
2.
##\varphi (a_{11} + b_{11}) = (v_1 + w_1) = (v_1) + (w_1) = \varphi (a_{11}) + \varphi (b_{11})##
##\varphi (c \cdot a_{11}) = (c \cdot v_1) = c \cdot (v_1) = c \cdot \varphi (a_{11})##
3. In my notes I've read that a linear transformation ##T## is injective iff ##Ker(T) = \{0\}##.
Thus,
##Ker(\varphi) = \{(a_{11}, a_{22}, \ldots , a_{nn}) \in D_n(\mathbb{k}) : (v_1, v_2, \ldots , v_n) = 0 \} \Rightarrow a_{ii} = 0 \;\forall\, i##
4. When I define ##\varphi## as the function that maps the diagonal matrices to the ##n##-tuples, am I not also showing that it is surjective?

Assuming I've proceeded more or less correctly through this problem, is it correct to say that in my second case, the polynomials over the field ##\mathbb{k}## of degree at most ##n## map to the ##(n+1)##-tuples, where ##a_0## maps to ##v_1## and so on up to ##a_nx^n##, which maps to ##v_{n+1}##?

Thanks as always.
 
  • #5
Thanks to both of you! So I've put together this for the first exercise then.

1. ##\varphi : D_n(\mathbb{k}) \rightarrow \mathbb{k}^n, \varphi (a_{11}, a_{22}, \ldots , a_{nn}) = (v_1, v_2, \ldots , v_n)##
... where ##v_i= a_{ii}##.
2.
##\varphi (a_{11} + b_{11}) = (v_1 + w_1) = (v_1) + (w_1) = \varphi (a_{11}) + \varphi (b_{11})##
##\varphi (c \cdot a_{11}) = (c \cdot v_1) = c \cdot (v_1) = c \cdot \varphi (a_{11})##
3. In my notes I've read that a linear transformation ##T## is injective iff ##Ker(T) = \{0\}##.
This is equivalent in the case of linear mappings. An arbitrary, i.e. not necessarily linear, function is injective iff ##T(x)=T(y) \Longrightarrow x=y## holds. Can you show the equivalence to your definition in the case that ##T## is linear?
Thus,
##Ker(\varphi) = \{(a_{11}, a_{22}, \ldots , a_{nn}) \in D_n(\mathbb{k}) : (v_1, v_2, \ldots , v_n) = 0 \} \Rightarrow a_{ii} = 0 \;\forall\, i##
4. When I define ##\varphi## as the function that maps the diagonal matrices to the ##n##-tuples, am I not also showing that it is surjective?
In principle, yes. Formally, no. You are given an element ##\mathbf{v}=(v_1,\ldots,v_n)\in \mathbb{k}^n## and then must show that there is a diagonal matrix which is mapped to ##\mathbf{v}## via ##\varphi##. Of course, it is pretty obvious which diagonal matrix will do.
Assuming I've proceeded more or less correctly through this problem,...
You did. Except that you should use ##v_i=a_{ii}## instead, because you haven't defined the ##v_i##. It isn't necessary until part 4, where you start with the ##v_i## and use the equation in the other direction: ##a_{ii}:=v_i## to define the matrix.
... is it correct to say that in my second case, the polynomials over the field ##\mathbb{k}## of degree at most ##n## map to the ##(n+1)##-tuples where ##a_0## maps to ##v_1## up to ##a_n##, which maps to ##v_{n+1}##?
That's the idea, but again, given a polynomial ##a_0+a_1x+\ldots +a_nx^n## you don't need any ##v_i## until part 4 for the other direction, just map the polynomial to ##(a_0,\ldots ,a_n)##.
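The two maps described in this thread can also be sanity-checked numerically. Below is a minimal illustrative sketch for part a); it is not a proof, and the helper names `phi`, `diag`, `mat_add`, `mat_scale` are chosen here for illustration, not taken from the exercise:

```python
# Illustrative sketch (not a proof): phi extracts the diagonal of an
# n x n diagonal matrix, stored as a list of rows.

def phi(A):
    """phi : D_n(k) -> k^n, send a diagonal matrix to the tuple of its diagonal entries."""
    return tuple(A[i][i] for i in range(len(A)))

def diag(v):
    """The other direction, used for surjectivity: A := diag(v_1, ..., v_n)."""
    n = len(v)
    return [[v[i] if i == j else 0 for j in range(n)] for i in range(n)]

def mat_add(A, B):
    """Entrywise matrix addition."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    """Scalar multiplication of a matrix."""
    return [[c * a for a in row] for row in A]

A, B = diag([1, 2, 3]), diag([4, 5, 6])

# Step 2 (linearity): phi(A + B) = phi(A) + phi(B) and phi(cA) = c * phi(A)
assert phi(mat_add(A, B)) == tuple(x + y for x, y in zip(phi(A), phi(B)))
assert phi(mat_scale(7, A)) == tuple(7 * x for x in phi(A))

# Steps 3-4 (bijectivity): diag is a two-sided inverse of phi on D_n(k)
v = (9, 8, 7)
assert phi(diag(list(v))) == v
assert diag(list(phi(A))) == A
```

The two-sided inverse makes injectivity and surjectivity simultaneous, which mirrors the observation above that the surjectivity argument is "obvious" once the map is written down.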
 
  • #6
Would the demonstration be something like...

Suppose ##\varphi## is injective. In this case, if ##\varphi(a_1, a_2, \ldots , a_n) = 0_{\mathbb{k}^n}##, then since ##0_{\mathbb{k}^n} = \varphi(0_{D_n(\mathbb{k})})## we can establish the equality ##\varphi(a_1, a_2, \ldots , a_n) = \varphi(0_{D_n(\mathbb{k})})##, from which we deduce that ##(a_1, a_2, \ldots , a_n) = 0_{D_n(\mathbb{k})}##. Thus if ##(a_1, a_2, \ldots , a_n) \in D_n(\mathbb{k})## satisfies ##\varphi(a_1, a_2, \ldots , a_n) = 0_{\mathbb{k}^n}##, we have that ##Ker(\varphi) = \{0_{D_n(\mathbb{k})}\}##

Reciprocally, if ##\varphi(a_1, a_2, \ldots , a_n) = \varphi(a_1', a_2', \ldots , a_n')##, then ##\varphi((a_1, a_2, \ldots , a_n) - (a_1', a_2', \ldots , a_n')) = 0_{\mathbb{k}^n}##, and if the linear map has kernel ##\{0\}## we conclude that ##(a_1, a_2, \ldots , a_n) - (a_1', a_2', \ldots , a_n') = 0_{D_n(\mathbb{k})}##, in other words that ##(a_1, a_2, \ldots , a_n) = (a_1', a_2', \ldots , a_n')##.

It seems kind of cumbersome (it certainly was to type out), but I think that will work.

When you say I should use ##v_i = a_{ii}## do you mean I should define the whole thing the other way around?

Finally, for surjectivity, I wrote: ##\varphi## is surjective because ##\forall v \in \mathbb{k}^n : v = (v_1, v_2, \ldots, v_n)\; \exists (a_1, a_2, \ldots , a_n) \in D_n(\mathbb{k}) : \varphi(a_1, a_2, \ldots , a_n) = (v_1, v_2, \ldots, v_n) ##, that is, ##\varphi(D_n(\mathbb{k})) = Im(\varphi) \in \mathbb{k}^n##

EDIT:

Also, as I feel I should be able to finish the exercise once I've cleared up these points, I would ask again for clarification about what the notation ##\mathbb{k}^{\{x\}}## denotes. Thank you!
 
  • #7
Would the demonstration be something like...

Suppose ##\varphi## is injective. In this case, if ##\varphi(a_1, a_2, \ldots , a_n) = 0_{\mathbb{k}^n}##, then since ##0_{\mathbb{k}^n} = \varphi(0_{D_n(\mathbb{k})})## we can establish the equality ##\varphi(a_1, a_2, \ldots , a_n) = \varphi(0_{D_n(\mathbb{k})})##, from which we deduce that ##(a_1, a_2, \ldots , a_n) = 0_{D_n(\mathbb{k})}##. Thus if ##(a_1, a_2, \ldots , a_n) \in D_n(\mathbb{k})## satisfies ##\varphi(a_1, a_2, \ldots , a_n) = 0_{\mathbb{k}^n}##, we have that ##Ker(\varphi) = \{0_{D_n(\mathbb{k})}\}##

Reciprocally, if ##\varphi(a_1, a_2, \ldots , a_n) = \varphi(a_1', a_2', \ldots , a_n')##, then ##\varphi((a_1, a_2, \ldots , a_n) - (a_1', a_2', \ldots , a_n')) = 0_{\mathbb{k}^n}##, and if the linear map has kernel ##\{0\}## we conclude that ##(a_1, a_2, \ldots , a_n) - (a_1', a_2', \ldots , a_n') = 0_{D_n(\mathbb{k})}##, in other words that ##(a_1, a_2, \ldots , a_n) = (a_1', a_2', \ldots , a_n')##.

It seems kind of cumbersome (it certainly was to type out), but I think that will work.
Yes, it does. It's correct; however, you made your life more complicated than necessary. You could have taken any linear function ##\varphi\, : \,V \longrightarrow W## and written
$$
\varphi(u)=\varphi(v) \Rightarrow \varphi(u)-\varphi(v)=\varphi(u-v)=0 \Rightarrow u-v \in \operatorname{ker}\varphi = \{0\} \Rightarrow u=v
$$
and for the other direction
$$
u \in \operatorname{ker}\varphi \Rightarrow \varphi(u)=0=\varphi(0) \Rightarrow u=0
$$
which is exactly the same as what you have written, simply without the many components of the vectors.
When you say I should use ##v_i = a_{ii}## do you mean I should define the whole thing the other way around?
No, I meant that you haven't said what the ##v_i## actually are. You wrote ##\varphi(a_{11},\ldots, a_{nn})=(v_1,\ldots ,v_n)## but what are those elements? If you define a mapping / function / transformation, you'll have to say what it does. In this case it is
$$
\varphi \, : \,D_n(\mathbb{k}) \longrightarrow \mathbb{k}^n\, , \, \varphi (A) := (a_{11}, \ldots , a_{nn} )
$$
for a matrix ##A \in D_n(\mathbb{k})##. There is no need for ##v_i##, especially if undefined. For the surjective part, however, we are given an element ##(v_1,\ldots ,v_n) \in \mathbb{k}^n## and there is no matrix yet. We construct one by ##A:=\operatorname{diag}(v_1,\ldots ,v_n)## and show that ##A \in D_n(\mathbb{k})## and ##\varphi(A)= (v_1,\ldots ,v_n)##. O.k., "show" is a bit over the top, since it is obvious in this case.

The formulation ##\operatorname{diag}(v_1,\ldots ,v_n)## is a short way to write diagonal matrices; you can just as well write the entire matrix, or define the matrix elements ##a_{ij}## by the cases ##i=j## and ##i\neq j##.
Finally, for surjectivity, I wrote: ##\varphi## is surjective because ##\forall v \in \mathbb{k}^n : v = (v_1, v_2, \ldots, v_n)\; \exists (a_1, a_2, \ldots , a_n) \in D_n(\mathbb{k}) : \varphi(a_1, a_2, \ldots , a_n) = (v_1, v_2, \ldots, v_n) ##, that is, ##\varphi(D_n(\mathbb{k})) = Im(\varphi) \in \mathbb{k}^n##
Almost. ##\exists (a_1, a_2, \ldots , a_n) \in D_n(\mathbb{k})## is misleading; that's what the ##\operatorname{diag}## is for. Otherwise you have written a vector and demanded it to be a matrix: the ##\operatorname{diag}(a_1, \ldots , a_n)## form solves this problem.

And again, we have the ##v_i## given and must find the ##a_i##, so writing directly ##a_i := v_i## defines them and they are found. It remains to show that this choice will do, but in this case, it's clear.

The last ##\in## should be an equality sign ##=##, because the point is that we must not only land in ##\mathbb{k}^n##, which is the first thing to check when defining ##\varphi##; we must also reach all vectors in ##\mathbb{k}^n##, so ##\varphi(D_n(\mathbb{k}))=\operatorname{im}\varphi = \mathbb{k}^n##.
EDIT:

Also, as I feel I should be able to finish the exercise once I've cleared up these points, I would ask again for clarification about what the notation ##\mathbb{k}^{\{x\}}## denotes. Thank you!
You're welcome, but I've never seen ##\mathbb{k}^{\{x\}}## before. I know ##\mathbb{k}^\times## which is sometimes used for ##\mathbb{k}-\{\,0\,\}## or ##\mathbb{k}^1=\mathbb{k}## or ##\mathbb{k}^\omega## if it is an infinite dimensional vector space, but I don't know an exponent in brackets.
 
  • #8
I'll reply to this post later today once I arrive home with my corrected proof that ##\varphi## is surjective, to make sure that my notation is what you were trying to describe.

As far as this unknown bit of notation, the exercise states that ##\mathbb{k}^{\{x\}} \cong \mathbb{k}##. Would that perhaps be coherent if the notation meant ##\mathbb{k} - \{0\}##?
 
  • #9
As far as this unknown bit of notation, the exercise states that ##\mathbb{k}^{\{x\}} \cong \mathbb{k}##. Would that perhaps be coherent if the notation meant ##\mathbb{k} - \{0\}##?
No, as ##\mathbb{k} - \{0\} \neq \mathbb{k}##. But ##\mathbb{k}^{\{x\}}## could mean all mappings ##\{\,x\,\} \to \mathbb{k}##, so there are as many (not necessarily linear) functions as there are available images for ##x##, and these are all elements of ##\mathbb{k}##. A bit more context would be helpful here. Are you sure it isn't ##\mathbb{k}^*##, which denotes all linear mappings from ##\mathbb{k}## to itself: ##\{\,x \longmapsto \lambda \cdot x\,|\,\lambda \in \mathbb{k}\,\} \cong \mathbb{k}\,?##
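If ##\mathbb{k}^{\{x\}}## does mean all maps from the one-element set ##\{x\}## to ##\mathbb{k}##, each such map is determined by its single value at ##x##. A small illustrative sketch, modeling such a map as a one-key dictionary (the names `to_field` and `to_map` are chosen here, not from the exercise):

```python
# Sketch: k^{x} = all maps {x} -> k, modeled as dictionaries with one key "x".
# Each map is determined by the single value f(x), giving a bijection with k.

def to_field(f):
    """Send a map f : {x} -> k to its only value, f(x)."""
    return f["x"]

def to_map(c):
    """Send an element c of k to the map x |-> c."""
    return {"x": c}

# The two directions are mutually inverse, so k^{x} is in bijection with k.
assert to_field(to_map(3.5)) == 3.5
assert to_map(to_field({"x": -2})) == {"x": -2}
```

Checking that this bijection respects addition and scalar multiplication of functions would then give the isomorphism ##\mathbb{k}^{\{x\}} \cong \mathbb{k}## as vector spaces.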
 
  • #10
##\mathbb{k}[X]_n \cong \mathbb{k}^n##

##\varphi : \mathbb{k}[X]_n \rightarrow \mathbb{k}^n, \varphi(a_0 + a_1x + \ldots + a_nx^n) = (a_0, a_1, \ldots , a_n)##

I abbreviate the polynomial as ##a##.

##\varphi(a+b) = \varphi(a) + \varphi(b) = (a_0, a_1, \ldots , a_n) + (b_0, b_1, \ldots , b_n) = (a_0 + b_0, a_1 + b_1, \ldots , a_n + b_n) = \varphi(a+b)##
##\varphi (c \cdot a) = (c \cdot (a_0, a_1, \ldots , a_n)) = c \cdot (a_0, a_1, \ldots , a_n) = c \cdot \varphi(a)##

##Ker(\varphi) = \{(a_0 + a_1x + \ldots + a_nx^n) \in \mathbb{k}[X]_n : (a_0, a_1, \ldots, a_n) = 0\}##
##Ker(\varphi) = \{0\} \rightarrow a_i = 0 \;\forall\, i##

##\forall v \in \mathbb{k}^n : v = (v_1, v_2, \ldots , v_{n+1}) ##
##\exists 'a'## (where ##a## is the aforementioned polynomial) ##\in \mathbb{k}[X]_n : \varphi(a_0 + a_1x + \ldots + a_nx^n) = (a_0, a_1, \ldots , a_n)## where ##a_i := v_{i+1} ##
##\forall a \in \mathbb{k}[X]_n, \forall v_{i} \in \mathbb{k}^n##
thus ##\varphi(\mathbb{k}[X]_n) = Im(\varphi) = \mathbb{k}^n##

I guess the notation is a little funky, for example I'm not sure if I can just state that polynomial exists in that way, and I'm not sure if I can assign ##a_i := v_{i+1}## as I have, but that's my initial attempt.

As for the mystery notation, ##\mathbb{k}^{\{x\}}## indeed indicates all mappings from ##\{x\} \rightarrow \mathbb{k}##. How would I go about writing the first step for this?

Thanks for your help as always.
 
  • #11
You put a lot of effort into layout and form, which is good, but if I read the details, they still reveal a bit of confusion. My corrections below may be a bit nit-picking, but I think this is the real exercise, and less so the results, which are more or less obvious.
##\mathbb{k}[X]_n \cong \mathbb{k}^n##
##\mathbb{k}^{n+1}##
##\varphi : \mathbb{k}[X]_n \rightarrow \mathbb{k}^n, \varphi(a_0 + a_1x + \ldots + a_nx^n) = (a_0, a_1, \ldots , a_n)##

I abbreviate the polynomial as ##a##.

##\varphi(a+b) = \varphi(a) + \varphi(b) ##
Done. That's what you want to show.
So the setup should be
\begin{align*}
\varphi(a+b) &= \varphi (a_0+a_1x+\ldots +a_nx^n+b_0+b_1x+\ldots +b_nx^n)\\
&= \varphi ((a_0+b_0)+(a_1+b_1)x+\ldots +(a_n+b_n)x^n)\\
&= ((a_0+b_0),(a_1+b_1),\ldots ,(a_n+b_n))\\
&=(a_0,a_1,\ldots,a_n)+(b_0,b_1,\ldots,b_n)\\
&=\varphi(a)+\varphi(b)
\end{align*}
such that we can read it all in the correct order:
What is ##\varphi(a+b)\,?## Let's see: ##\varphi(a+b)= \ldots = \ldots =\varphi(a)+\varphi(b)##. Check!
##\ldots (a_0, a_1, \ldots , a_n) + (b_0, b_1, \ldots , b_n) = (a_0 + b_0, a_1 + b_1, \ldots , a_n + b_n) = \varphi(a+b)##
##\varphi (c \cdot a) = (c \cdot (a_0, a_1, \ldots , a_n)) = c \cdot (a_0, a_1, \ldots , a_n) = c \cdot \varphi(a)##
Better, but still half of the work is hidden behind the first equality sign. It looks a bit as if you only wrote ##\varphi(ca)=c\varphi(a)##, which is to be shown, not started with. It is true, and easy to see, so again I would expect the exercise to be formally correct. You have a polynomial here; where is it? The whole thing works because the scalar multiplication of polynomials multiplies each term, and this multiplication is the same as in ##\mathbb{k}^{n+1}##. So both multiplications should be visible: ##c## times the polynomial and ##c## times the ##(n+1)##-tuple. You have neither.
##Ker(\varphi) = \{(a_0 + a_1x + \ldots + a_nx^n) \in \mathbb{k}[X]_n : (a_0, a_1, \ldots, a_n) = 0\}##
##Ker(\varphi) = \{0\} \rightarrow a_i = 0 \;\forall\, i##
Again a bit of order. The first line is correct. The second, too, just a bit confusing. Why didn't you continue with the first line?
##Ker(\varphi) = \{(a_0 + a_1x + \ldots + a_nx^n) \in \mathbb{k}[X]_n : (a_0, a_1, \ldots, a_n) = 0\}##
##= \{(a_0 + a_1x + \ldots + a_nx^n) \in \mathbb{k}[X]_n : a_0 = 0, a_1 = 0, \ldots , a_n = 0\}##
##= \{(a_0 + a_1x + \ldots + a_nx^n) \in \mathbb{k}[X]_n : a_0+ a_1x+ \ldots + a_nx^n = 0\}##
##= \{0\}##
##\forall v \in \mathbb{k}^n : v = (v_1, v_2, \ldots , v_{n+1}) ##
##\exists 'a'## (where ##a## is the aforementioned polynomial) ##\in \mathbb{k}[X]_n : \varphi(a_0 + a_1x + \ldots + a_nx^n) = (a_0, a_1, \ldots , a_n)## where ##a_i := v_{i+1} ##
##\forall a \in \mathbb{k}[X]_n, \forall v_{i} \in \mathbb{k}^n##
thus ##\varphi(\mathbb{k}[X]_n) = Im(\varphi) = \mathbb{k}^n##

I guess the notation is a little funky, for example I'm not sure if I can just state that polynomial exists in that way, and I'm not sure if I can assign ##a_i := v_{i+1}## as I have, but that's my initial attempt.
Funky is a nice description, although I might mean it a bit differently. The structure is as follows:
Given any ##v \in \mathbb{k}^{n+1}## (sic!), ##v = (v_1, v_2, \ldots , v_{n+1})##, arbitrary but fixed. The "for all" quantifier is hidden in "any", because we only treat one example and not all at once. Since we do not restrict the choice of ##v##, "any" guarantees us that the following holds for all ##v##. The task is to find a polynomial ##a=a_0+a_1x+\ldots +a_nx^n## with (to be shown!) ##\varphi(a)=v##.
I can assign ##a_i := v_{i+1}##
for ##i=0,\ldots ,n## and so
##\varphi(a)=\varphi(a_0+a_1x+\ldots +a_nx^n)=\varphi(v_1+v_2x+\ldots +v_nx^{n-1}+v_{n+1}x^n)=(v_1, v_2, \ldots , v_{n+1}) =v##
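The computation above can also be checked mechanically if a polynomial is stored as its coefficient list. An illustrative sketch (the name `psi` is chosen here, not from the thread):

```python
# Sketch: an element of k[X]_n stored as its coefficient list [a_0, a_1, ..., a_n],
# so the map to k^{n+1} is just "read off the coefficients".

def psi(coeffs):
    """psi(a_0 + a_1 x + ... + a_n x^n) = (a_0, a_1, ..., a_n)."""
    return tuple(coeffs)

# Surjectivity direction: given v = (v_1, ..., v_{n+1}), set a_i := v_{i+1},
# i.e. the coefficient list is exactly v read as [a_0, ..., a_n].
v = (5, 0, -2, 1)                 # an element of k^{n+1} with n = 3
a = list(v)                       # the polynomial a_0 + a_1 x + ... + a_n x^n
assert psi(a) == v                # phi(a) = v, as in the computation above

# Additivity: coefficientwise polynomial addition matches tuple addition in k^{n+1}
b = [1, 1, 1, 1]
assert psi([x + y for x, y in zip(a, b)]) == tuple(x + y for x, y in zip(psi(a), psi(b)))
```

This makes visible why no ##v_i## are needed until the surjectivity step: in the other three steps the map only reads coefficients off a given polynomial.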
As for the mystery notation, ##\mathbb{k}^{\{x\}}## indeed indicates all mappings from ##\{x\} \rightarrow \mathbb{k}##. How would I go about writing the first step for this?

Thanks for your help as always.
Of course, these corrections of mine are more detailed than necessary, but I wanted you to see exactly what is going on, step by step.
 
