Solving Linear Mapping Problems with Matrix $M$ in $\mathbb K^2$

In summary, this conversation finds a matrix $M$ for a linear mapping $f$ with $f\neq 0$ and $f^2=0$, and proves that for a linear mapping $\psi$ with $\psi^k\neq 0$ and $\psi^{k+1}=0$ there is an element $x\in V$ such that the set $\{x, \psi(x), \ldots, \psi^k(x)\}$ is linearly independent. The conversation also considers different approaches to the proof, including a suitable choice of $x$, an induction argument, and the basic properties of linear mappings.
  • #1
mathmari
Hey! :eek:

Let $\mathbb{K}$ be a field.

  1. Find a matrix $M\in \mathbb{K}^{2\times 2}$ such that for the linear mapping $f:\mathbb{K}^2\rightarrow \mathbb{K}^2, x\mapsto Mx$ it holds that $f\neq 0$ and $f^2:=f\circ f=0$.
  2. Let $V$ be a $\mathbb{K}$-vector space and $\psi:V\rightarrow V$ be a linear mapping, with $\psi^k\neq 0$ and $\psi^{k+1}=0$ for some $k>0$. Show that there is an element $x\in V$ such that the set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly independent.
I have done the following:

  1. Since $f\neq 0$ it holds that $M$ is not the zero matrix.
    We have that $f^2(x):=f(f(x))=f(Mx)=MMx=M^2x$.
    So, we have to find a matrix $M$ such that $M^2$ is the zero matrix.
    Let $M=\begin{pmatrix}m_1&m_2 \\m_3&m_4\end{pmatrix}$.
    Then we have the following:
    $$M^2=0 \Rightarrow \begin{pmatrix}m_1&m_2 \\m_3&m_4\end{pmatrix}\begin{pmatrix}m_1&m_2 \\m_3&m_4\end{pmatrix}=\begin{pmatrix}0&0 \\0&0\end{pmatrix} \Rightarrow\begin{pmatrix}m_1^2+m_2m_3&m_1m_2+m_2m_4 \\m_3m_1+m_4m_3&m_3m_2+m_4^2\end{pmatrix}=\begin{pmatrix}0&0 \\0&0\end{pmatrix}$$
    So, we are looking for a solution for the following system:
    $$m_1^2+m_2m_3=0 \\ m_1m_2+m_2m_4=0 \Rightarrow m_2(m_1+m_4)=0\\m_3m_1+m_4m_3=0\Rightarrow m_3(m_1+m_4)=0 \\ m_3m_2+m_4^2=0$$
    From the second and third equations we take $m_1+m_4=0 \Rightarrow m_1=-m_4$; let $m_1=-m_4=1$. Then the first and fourth equations give $1+m_2m_3=0\Rightarrow m_2m_3=-1$. Let $m_2=-m_3=1$.
    So, we get the matrix: $$M=\begin{pmatrix}1&1 \\-1&-1\end{pmatrix}$$

    Is this correct? (Wondering) (A quick numeric check of this matrix is sketched after this post.)
  2. Let $x\in V$. Suppose that the set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly dependent, i.e., there are $c_i$'s not all zero such that $c_0x+ c_1\psi (x)+ \ldots + c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$.
    $$\psi (c_0x+ c_1\psi (x)+ \ldots + c_{k-1}\psi^{k-1}(x)+ c_k\psi^k(x))=\psi (0)=0 \\ \Rightarrow c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-1}\psi^k(x)+c_k\psi^{k+1}(x)=0 \\ \Rightarrow c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-1}\psi^k(x)=0$$
    (It holds that $\psi (0)=0$ since it is a linear mapping, or not?)

    Is this correct so far? How could we continue? (Wondering)
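As a quick sanity check of part 1 (an addition, not part of the original derivation: it takes $\mathbb K=\mathbb R$ and uses numpy), both conditions can be verified numerically:

```python
import numpy as np

# Candidate matrix from the derivation above, taking K = R for the check.
M = np.array([[1, 1],
              [-1, -1]])

assert M.any()              # f != 0: M is not the zero matrix
assert not (M @ M).any()    # f o f = 0: M^2 is the zero matrix
print(M @ M)                # [[0 0]
                            #  [0 0]]
```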
 
  • #2
mathmari said:
2. Let $x\in V$. Suppose that the set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly dependent, i.e., there are $c_i$'s not all zero such that $c_0x+ c_1\psi (x)+ \ldots + c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$.
$$\psi (c_0x+ c_1\psi (x)+ \ldots + c_{k-1}\psi^{k-1}(x)+ c_k\psi^k(x))=\psi (0)=0 \\ \Rightarrow c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-1}\psi^k(x)+c_k\psi^{k+1}(x)=0 \\ \Rightarrow c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-1}\psi^k(x)=0$$
(It holds that $\psi (0)=0$ since it is a linear mapping, or not?)
$$\psi (c_0x+ c_1\psi (x)+ \ldots + c_{k-1}\psi^{k-1}(x)+ c_k\psi^k(x))=\psi (0)=0 \\ \Rightarrow c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-1}\psi^k(x)+c_k\psi^{k+1}(x)=0 \\ \Rightarrow c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-1}\psi^k(x)=0 \\ \Rightarrow \psi (c_0\psi (x)+ c_1\psi^2 (x)+ \ldots + c_{k-2}\psi^{k-1}(x)+c_{k-1}\psi^k(x))=\psi (0) \\ \Rightarrow c_0\psi^2 (x)+ c_1\psi^3 (x)+ \ldots + c_{k-2}\psi^k(x)+c_{k-1}\psi^{k+1}(x)=0 \\ \Rightarrow c_0\psi^2 (x)+ c_1\psi^3 (x)+ \ldots + c_{k-2}\psi^k(x)=0 \\ \ldots \\ \Rightarrow c_0\psi^k(x)+c_1\psi^{k+1}(x)=0 \\ \Rightarrow c_0\psi^k(x)=0$$ Since $\psi^k(x)\neq 0$ we conclude that $c_0=0$.

Do we do the same again $k$ times to conclude that $c_i=0$ for all $i=0, \ldots , k$ ? (Wondering)

Or do we show it in another way? (Wondering)
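The pattern in the computation above can be written compactly (a reformulation of the same step, under the assumption that $c_0=\ldots =c_{j-1}=0$ has already been shown): applying $\psi^{k-j}$ to the remaining relation kills every term except the one involving $c_j$, since $\psi^n=0$ for all $n\geq k+1$:
$$\psi^{k-j}\left(\sum_{i=j}^{k} c_i\,\psi^{i}(x)\right)=\sum_{i=j}^{k} c_i\,\psi^{k-j+i}(x)=c_j\,\psi^{k}(x)=0.$$
So whenever $\psi^k(x)\neq 0$, each step forces the next coefficient $c_j$ to vanish.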
 
  • #3
Hey mathmari! (Smile)

mathmari said:
  1. Find a matrix $M\in \mathbb{K}^{2\times 2}$ such that for the linear mapping $f:\mathbb{K}^2\rightarrow \mathbb{K}^2, x\mapsto Mx$ it holds that $f\neq 0$ and $f^2:=f\circ f=0$.
  2. Let $V$ be a $\mathbb{K}$-vector space and $\psi:V\rightarrow V$ be a linear mapping, with $\psi^k\neq 0$ and $\psi^{k+1}=0$ for some $k>0$. Show that there is an element $x\in V$ such that the set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly independent.

...

$$M=\begin{pmatrix}1&1 \\-1&-1\end{pmatrix}$$
Is this correct? (Wondering)

Yep. (Nod)

Alternatively, we can observe that $M$ must be similar to a Jordan normal form.
Suppose for $v\ne 0$ we have $Mv=\lambda v$. Then $0=M^2v=\lambda^2 v$, so $\lambda = 0$: the only possible eigenvalue of $M$ is $0$.
Since $M \ne 0$, it follows that $M$ is similar to the Jordan normal form:
$$M \sim \begin{pmatrix}0&1 \\ 0&0\end{pmatrix}$$

Your matrix $M$ is indeed similar to that form. (Nerd)
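For what it's worth, the similarity can be made explicit (a sketch over $\mathbb K=\mathbb R$, with the change-of-basis matrix $P$ chosen by hand; its columns form a Jordan chain, $p_1\in\ker M$ and $Mp_2=p_1$):

```python
import numpy as np

M = np.array([[1., 1.],
              [-1., -1.]])
J = np.array([[0., 1.],
              [0., 0.]])      # Jordan block with eigenvalue 0

# Jordan chain: p1 = (1, -1) spans ker M, and p2 = (1, 0) solves M p2 = p1.
P = np.array([[1., 1.],
              [-1., 0.]])

# M = P J P^(-1)  is equivalent to  M P = P J.
assert np.allclose(M @ P, P @ J)
assert np.allclose(M, P @ J @ np.linalg.inv(P))
```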

...
Do we do the same again $k$ times to conclude that $c_i=0$ for all $i=0, \ldots , k$ ? (Wondering)

Yep. (Nod)

Now let's add the initial choice for $x$ such that $\psi^k(x) \ne 0$. Such an $x$ must exist, since $\psi^k \ne 0$.
Then it follows that we have a contradiction.
Therefore, with that choice of $x$, it follows that $x, \psi(x), ..., \psi^k(x)$ are linearly independent. (Thinking)
 
  • #4
I like Serena said:
Yep. (Nod)

Now let's add the initial choice for $x$ such that $\psi^k(x) \ne 0$. Such an $x$ must exist, since $\psi^k \ne 0$.
Then it follows that we have a contradiction.
Therefore, with that choice of $x$, it follows that $x, \psi(x), ..., \psi^k(x)$ are linearly independent. (Thinking)

Could we formulate it also as follows?

Let $x\in V$ such that $\psi^k(x) \ne 0$. The set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly independent if and only if $c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$ implies $c_0=c_1=\ldots =c_{k-1}=c_k=0$.

Since $\psi$ is a linear mapping we have that $\psi (0)=0$.

Since $\psi^{k+1}(x)=0$ and $\psi (0)=0$ we have that $\psi^n(x)=0, \ \forall n\geq k+1$.

We have the following:
$$\psi^k (c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x))=\psi^k (0) \\ \Rightarrow c_0\psi^k(x)+c_1\psi^{k+1} (x)+\ldots +c_{k-1}\psi^{2k-1}(x)+c_k\psi^{2k}(x)=0 \\ \Rightarrow c_0\psi^k(x)=0 \\ \Rightarrow c_0=0$$

So, we get $c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$.

We have that
$$\psi^{k-1} (c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x))=\psi^{k-1} (0) \\ \Rightarrow c_1\psi^k (x)+\ldots +c_{k-1}\psi^{2k-2}(x)+c_k\psi^{2k-1}(x)=0 \\ \Rightarrow c_1\psi^k(x)=0 \\ \Rightarrow c_1=0$$

So, we get $c_2\psi^2 (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$.

When we repeat this $(k-1)$ more times, applying $\psi^{k-2}, \ldots , \psi^0$ in turn (or not?), we get that $c_i=0, \ \forall i=0, \ldots, k$. Is everything correct? Could I improve something? (Wondering)
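A small numeric illustration of this step (an addition: it uses the $4\times 4$ shift matrix over $\mathbb R$ as $\psi$, so $k=3$, and a randomly chosen combination): applying $\psi^k$ really does wipe out every term except $c_0\psi^k(x)$.

```python
import numpy as np

k = 3
n = k + 1
N = np.eye(n, k=1)                  # shift matrix: N^k != 0, N^(k+1) = 0
Nk = np.linalg.matrix_power(N, k)
assert Nk.any() and not (N @ Nk).any()

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
assert (Nk @ x).any()               # the required choice psi^k(x) != 0

c = rng.standard_normal(n)          # arbitrary coefficients c_0, ..., c_k
combo = sum(c[i] * np.linalg.matrix_power(N, i) @ x for i in range(n))

# Applying psi^k leaves only the c_0 term of the combination.
assert np.allclose(Nk @ combo, c[0] * (Nk @ x))
```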
 
  • #5
It looks all correct to me. (Nod)

Formally, I think we're supposed to set up a proof by induction though... (Thinking)
 
  • #6
I like Serena said:
Formally, I think we're supposed to set up a proof by induction though... (Thinking)

Ah ok. So, is the induction as follows? (Wondering)

Let $x\in V$ such that $\psi^k(x) \ne 0$. We will show that the set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly independent, i.e., that if $c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$ then $c_i=0, \forall i$.

Base case: For $i=0$ we have the following:

Since $\psi$ is a linear mapping we have that $\psi (0)=0$.
$$\psi^k (c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x))=\psi^k (0) \\ \Rightarrow c_0\psi^k(x)+c_1\psi^{k+1} (x)+\ldots +c_{k-1}\psi^{2k-1}(x)+c_k\psi^{2k}(x)=0 \\ \Rightarrow c_0\psi^k(x)=0 \\ \Rightarrow c_0=0 \ \ \checkmark$$ Inductive hypothesis: We suppose that it holds for $i\leq m$ : $$c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0 \Rightarrow c_i=0, \forall 0\leq i\leq m$$ Inductive step: We want to show that it holds for $i=m+1$:
From the inductive hypothesis we get that $c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0 \Rightarrow c_i=0, \forall 0\leq i\leq m$. So $c_{m+1}\psi^{m+1}(x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$.
Then we have the following:
$$\psi^{k-m-1} (c_{m+1}\psi^{m+1}(x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x))=\psi^{k-m-1} (0) \\ \Rightarrow c_{m+1}\psi^k (x)+\ldots +c_{k-1}\psi^{2k-m-2}(x)+c_k\psi^{2k-m-1}(x)=0 \\ \Rightarrow c_{m+1}\psi^k(x)=0 \\ \Rightarrow c_{m+1}=0$$
So, we have that when $c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0$ then $c_i=0, \forall i$, i.e., the set $\{x, \psi (x), \ldots , \psi^k(x)\}$ is linearly independent. This is the only way to show that the set is linearly independent, right? (Wondering)
 
  • #7
mathmari said:
Base case: For $i=0$ we have the following:

"i" is not the index here, k is.
 
  • #8
HallsofIvy said:
"i" is not the index here, k is.

So, do we have to show that for each $k\geq 0$ it holds that $$c_0x+c_1\psi (x)+\ldots +c_{k-1}\psi^{k-1}(x)+c_k\psi^k(x)=0\Rightarrow c_0=\ldots =c_k=0$$ ? (Wondering)

Base case: For $k=0$ we have that $\psi^0(x)=x\neq 0$ and then $$c_0x=0\Rightarrow c_0=0\ \checkmark$$

Inductive hypothesis: We suppose that it holds for $k=n$:
$$c_0x+c_1\psi (x)+\ldots +c_{n-1}\psi^{n-1}(x)+c_n\psi^n(x)=0\Rightarrow c_0=\ldots =c_n=0$$

Inductive step: We want to show that it holds for $k=n+1$:
$$c_0x+c_1\psi (x)+\ldots +c_{n}\psi^{n}(x)+c_{n+1}\psi^{n+1}(x)=0\Rightarrow c_0=\ldots =c_{n+1}=0$$
What can we do here? How can we use the inductive hypothesis? We could apply it only if we had $c_0x+c_1\psi (x)+\ldots +c_{n-1}\psi^{n-1}(x)+c_n\psi^n(x)=0$, or not? (Wondering)
 
  • #9
mathmari said:
Inductive step: We want to show that it holds for $k=n+1$:
$$c_0x+c_1\psi (x)+\ldots +c_{n}\psi^{n}(x)+c_{n+1}\psi^{n+1}(x)=0\Rightarrow c_0=\ldots =c_{n+1}=0$$

To apply the inductive hypothesis here, we have to show first that $c_{n+1}=0$, or not? But how? When we apply any map $\psi^i$, the term $c_{n+1}\psi^{n+1}(x)$ will become zero. (Wondering)
 
  • #10
I think we should prove for an arbitrary $k$ that $c_0 = 0, ..., c_i=0, ..., c_k=0$.

Then the base case is to prove that $c_0 = 0$.
And the induction step is to assume that $c_0=...=c_i=0$, and to prove that $c_{i+1}=0$. (Thinking)
 
  • #11
I tried now the following:

Let $x\in V$ such that $\psi^k(x) \ne 0$. We will show that the set $\{\psi^{k-n}(x),\psi^{k-(n-1)}(x),\dots,\psi^{k}(x)\}$ is linearly independent for $0\leq n\leq k$, i.e., that if $c_0\psi^{k-n}(x)+c_1 \psi^{k-(n-1)}(x)+\ldots +c_n\psi^{k}(x)=0$ then $c_0=\ldots =c_n=0$.

Base case: For $n=0$ we have the set $\{\psi^k(x)\}$. A set consisting of a single non-zero element is linearly independent.

Inductive hypothesis: We suppose that it holds for $n=i$:
The set $\{\psi^{k-i}(x),\psi^{k-(i-1)}(x),\dots,\psi^{k}(x)\}$ is linearly independent, i.e., $$c_0\psi^{k-i}(x)+\ldots +c_i\psi^k(x)=0\Rightarrow c_0=\ldots =c_i=0$$

Inductive step: We want to show that it holds for $n=i+1$:
$$c_0\psi^{k-(i+1)}(x)+c_1 \psi^{k-i}(x)+\ldots +c_{i}\psi^{k-1}(x)+c_{i+1}\psi^{k}(x)=0 \\ \Rightarrow \psi (c_0\psi^{k-(i+1)}(x)+c_1 \psi^{k-i}(x)+\ldots +c_{i}\psi^{k-1}(x)+c_{i+1}\psi^{k}(x))=\psi (0) \\ \Rightarrow c_0\psi^{k-i}(x)+c_1 \psi^{k-(i-1)}(x)+\ldots +c_{i}\psi^{k}(x)+c_{i+1}\psi^{k+1}(x)=0 \\ \Rightarrow c_0\psi^{k-i}(x)+c_1 \psi^{k-(i-1)}(x)+\ldots +c_{i}\psi^{k}(x)=0$$ By the inductive hypothesis the set $\{\psi^{k-i}(x),\psi^{k-(i-1)}(x),\dots,\psi^{k}(x)\}$ is linearly independent, and so $c_0=\ldots =c_{i}=0$.

So, from $c_0\psi^{k-(i+1)}(x)+c_1 \psi^{k-i}(x)+\ldots +c_{i}\psi^{k-1}(x)+c_{i+1}\psi^{k}(x)=0$ we get that $c_{i+1}\psi^{k}(x)=0$, and since $\psi^{k}(x)\neq 0$ it follows that $c_{i+1}=0$.

Therefore, $c_0=c_1=\ldots =c_{i}=c_{i+1}=0$.

So, the set $\{\psi^{k-n}(x),\psi^{k-(n-1)}(x),\dots,\psi^{k}(x)\}$ is linearly independent for each $n$.

For $n=k$ we get that the set $\{x,\psi(x),\dots,\psi^{k}(x)\}$ is linearly independent. Is this correct? (Wondering)
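To see the final statement in action (an illustrative sketch, again with the shift matrix over $\mathbb R$ and $k=3$): stacking $x,\psi(x),\ldots,\psi^k(x)$ as columns gives a matrix of full rank $k+1$, which is exactly linear independence.

```python
import numpy as np

k = 3
n = k + 1
N = np.eye(n, k=1)       # psi with psi^k != 0 and psi^(k+1) = 0

# Choose x with psi^k(x) != 0; the last standard basis vector works here.
x = np.zeros(n)
x[-1] = 1.0
assert (np.linalg.matrix_power(N, k) @ x).any()

# Columns x, psi(x), ..., psi^k(x); independence <=> full column rank.
B = np.column_stack([np.linalg.matrix_power(N, i) @ x for i in range(k + 1)])
assert np.linalg.matrix_rank(B) == k + 1
```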
 

1. What is a linear mapping problem?

A linear mapping problem refers to a mathematical problem that involves transforming a set of points or vectors in one space (such as $\mathbb K^2$) to another space using a linear transformation. This is typically represented using a matrix $M$ and is used in various fields, such as physics, engineering, and computer graphics.

2. What is a matrix $M$?

A matrix $M$ is a rectangular array of numbers or symbols that can be used to represent a linear transformation. In the context of solving linear mapping problems in $\mathbb K^2$, $M$ is typically a 2x2 matrix, with each entry representing a coefficient that determines how each coordinate in $\mathbb K^2$ is transformed.

3. How do you solve a linear mapping problem using matrix $M$ in $\mathbb K^2$?

To solve a linear mapping problem using matrix $M$ in $\mathbb K^2$, you would first represent the points or vectors in $\mathbb K^2$ as column vectors. Then you multiply each column vector by the matrix $M$ to obtain the transformed coordinates; this is just matrix-vector multiplication.
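For example (a minimal sketch, assuming $\mathbb K=\mathbb R$ and using numpy; the matrix here is just an illustrative choice):

```python
import numpy as np

M = np.array([[2., 0.],
              [0., 3.]])   # scales the first coordinate by 2, the second by 3
v = np.array([1., 1.])     # a point of R^2 treated as a column vector

print(M @ v)               # [2. 3.] -- the transformed coordinates
```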

4. What is the significance of $\mathbb K^2$ in linear mapping problems?

$\mathbb K^2$ denotes the two-dimensional coordinate space over the field $\mathbb K$; for $\mathbb K=\mathbb R$ this is the familiar Cartesian plane. In linear mapping problems, it represents the original space of the points or vectors that need to be transformed, and it makes the dimensions and coordinates involved in the problem explicit.

5. Can matrix $M$ be used for solving linear mapping problems in spaces other than $\mathbb K^2$?

Yes, matrix $M$ can be used for solving linear mapping problems in spaces other than $\mathbb K^2$. The size and entries of the matrix will vary depending on the dimensions and coordinates of the space being transformed. However, the basic principles and operations involved in solving the problem using matrix $M$ remain the same.
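For instance (an illustrative sketch, not from the thread), the analogous nilpotency question in $\mathbb K^{3\times 3}$ admits matrices with $M^2\neq 0$ but $M^3=0$:

```python
import numpy as np

M = np.eye(3, k=1)       # 3x3 shift matrix: ones on the first superdiagonal
assert (M @ M).any()                              # M^2 != 0
assert not np.linalg.matrix_power(M, 3).any()     # M^3 = 0
```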
