MHB Show that the subspace U is ϕ^z-invariant

mathmari
Hey! 😊

Let $\mathbb{K}$ be a field and let $V$ be a $\mathbb{K}$-vector space.

Let $\phi,\psi:V\rightarrow V$ be linear maps, such that $\phi\circ\psi=\psi\circ\phi$.

I have shown by induction that if $U\leq_{\phi}V$ (i.e. $U$ is a subspace of $V$ and is $\phi$-invariant), then $U\leq_{\phi^k}V$ for all $k\in \mathbb{N}$.

Now I want to show that if $\phi$ is invertible and if $U\leq_{\phi}V$, then $U\leq_{\phi^z}V$ for all $z\in \mathbb{Z}$. My idea is the following:

If $z=:n\in \mathbb{Z}_{> 0}$, then $n\in \mathbb{N}$, so from the previous result it follows that $U$ is $\phi^n$-invariant.

Since $\phi$ is invertible, its inverse $\chi:=\phi^{-1}$ is also a linear map.

If $z=:-n\in \mathbb{Z}_{< 0}$, with $n\in \mathbb{N}$, then $\phi^{-n}=\left (\phi^{-1}\right )^n=\chi^n$. So if we show that $U$ is $\chi$-invariant, it follows from the previous result that $U$ is $\phi^{-n}$-invariant. Is that correct?

Or should we show it in another way?

:unsure:
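To sanity-check the reduction $\phi^{-n}=\chi^n$ numerically, here is a small sketch (not part of the thread; the upper-triangular matrix below is just an illustrative choice of an invertible $\phi$ with invariant line $U=\langle e_1\rangle$):

```python
import numpy as np

# Illustrative invertible phi on R^2 with phi-invariant U = span(e1).
phi = np.array([[2.0, 1.0],
                [0.0, 3.0]])   # det = 6, so invertible
e1 = np.array([1.0, 0.0])

chi = np.linalg.inv(phi)       # chi := phi^{-1}

for n in range(1, 6):
    # phi^{-n} computed two ways: directly, and as chi^n
    neg_power = np.linalg.matrix_power(phi, -n)
    chi_power = np.linalg.matrix_power(chi, n)
    assert np.allclose(neg_power, chi_power)
    # the image of e1 stays in U = span(e1): its second coordinate is 0
    assert abs((neg_power @ e1)[1]) < 1e-12
```

Here `np.linalg.matrix_power` with a negative exponent inverts the matrix first, which matches the identity $\phi^{-n}=(\phi^{-1})^n$ being checked.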
 
Hey mathmari!

I don't think it is true. 😢

Counterexample
Consider $\phi=\begin{pmatrix}1&0\\0&0\end{pmatrix}$, $\psi=I_2$, $U=\left\langle e_1\right\rangle$, and $V=\mathbb R^2$.
We have $U\le_\phi V$, but we don't have $U\le_{\phi^{-1}}V$ do we? (Worried)
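The counterexample can be checked numerically; this sketch (added for illustration) confirms that $U=\langle e_1\rangle$ is $\phi$-invariant while $\phi$ itself is singular:

```python
import numpy as np

# The proposed counterexample: phi is a projection onto span(e1).
phi = np.array([[1.0, 0.0],
                [0.0, 0.0]])
e1 = np.array([1.0, 0.0])

# phi(e1) = e1 lies in U, so U is phi-invariant
assert np.allclose(phi @ e1, e1)

# but phi is not invertible: det(phi) = 0, so phi^{-1} does not exist
assert np.isclose(np.linalg.det(phi), 0.0)
```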
 
But that $\phi$ is not invertible. I think that the result is true if $\phi$ is invertible and the space $V$ is finite-dimensional over $\Bbb K$, but not if $V$ is infinite-dimensional.

For example, let $V$ have a basis $\{e_n:n\in\Bbb{Z}\}$, let $U$ be the subspace spanned by $\{e_n:n\geqslant0\}$, and let $\phi$ be the shift map given by $\phi(e_n) = e_{n+1}$. Then $U$ is invariant under $\phi$, but not under the inverse map $\phi^{-1}$, the backward shift, which takes $e_0$ to $e_{-1}\notin U$.
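The infinite-dimensional example can be modeled concretely (my own sketch, not from the thread) with finitely supported sequences, representing a vector as a dict from basis index to coefficient:

```python
# V = finitely supported sequences over Z; U = span of {e_n : n >= 0};
# phi is the shift e_n -> e_{n+1}, so phi^k shifts every index by k.

def shift(v, k):
    """Apply phi^k: move each basis index n to n + k."""
    return {n + k: c for n, c in v.items()}

def in_U(v):
    """v lies in U iff every nonzero coefficient sits at an index >= 0."""
    return all(n >= 0 for n, c in v.items() if c != 0)

e0 = {0: 1}
assert in_U(shift(e0, 1))        # phi(e_0) = e_1 is in U
assert not in_U(shift(e0, -1))   # phi^{-1}(e_0) = e_{-1} is not in U
```

So $\phi$ is invertible on $V$ and preserves $U$, yet $\phi^{-1}$ does not, exactly as described.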
 
So the statement of the exercise is not true in general; it holds only under some restrictions? :unsure:
 
If $V$ is allowed to be infinite-dimensional, it is not generally true as Opalg pointed out.

Still, if $V$ is finite-dimensional, we can prove it.

Since $U\le_\phi V$, we have that $\phi(U)$ is a linear subspace of $U$.
Let $\{u_i\}$ be a basis of $U$.
Since $\phi$ is invertible, $\{\phi(u_i)\}$ is a set of independent vectors.
As they also lie in $U$, it follows that $\{\phi(u_i)\}$ is a basis of $U$ as well, i.e. $\phi(U)=U$.
Therefore each $\phi^{-1}(u_i)$ is in $U$, and we have $U\le_{\phi^{-1}} V$. :geek:

The result for $\phi^{-n}$ follows as you already found. (Nod)

That leaves the case $z=0$, which is trivial since $U\le_{\text{id}} V$.
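The finite-dimensional argument can be traced numerically; in this sketch (an illustrative choice, not from the thread) $\phi$ is an invertible map on $\mathbb R^3$ and $U=\langle e_1,e_2\rangle$:

```python
import numpy as np

# Invertible phi on R^3 that preserves U = span(e1, e2):
# the last row couples e3 only to itself.
phi = np.array([[1.0, 1.0, 0.0],
                [0.0, 2.0, 0.0],
                [0.0, 0.0, 3.0]])
U_basis = np.eye(3)[:, :2]      # columns e1, e2

# phi(u_i) stays in U: the third coordinate of each image is 0
images = phi @ U_basis
assert np.allclose(images[2, :], 0.0)

# the images are independent, hence again a basis of U, so phi(U) = U
assert np.linalg.matrix_rank(images) == 2

# therefore phi^{-1} also maps U into U
pre = np.linalg.inv(phi) @ U_basis
assert np.allclose(pre[2, :], 0.0)
```

The rank check is exactly the step in the proof where independence of $\{\phi(u_i)\}$ forces them to span $U$ again.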
 
