# A real parameter guaranteeing subspace invariance

TheSodesa

## Homework Statement

Let ##A## and ##B## be square matrices, such that ##AB = \alpha BA##. Investigate, with which value of ##\alpha \in \mathbb{R}## the subspace ##N(B)## is ##A##-invariant.

## Homework Equations

If ##S## is a subspace and ##A \in \mathbb{C}^{n \times n}##, we define multiplying ##S## by ##A## as follows:

$$AS = \{ A \vec{x}: \vec{x} \in S \} \tag{1}$$

The subspace ##S## is said to be ##A##-invariant, if

$$AS \subset S \tag{2}$$

The null space of a matrix ##A \in \mathbb{C}^{n \times n}## is

$$N(A) = \{ \vec{x} \in \mathbb{C}^{n}: A \vec{x} = \vec{0} \} \tag{3}$$
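As a concrete illustration of these definitions, here is a small pure-Python sketch (2x2 matrices as nested tuples; the particular ##A## and ##B## below are illustrative choices, not matrices from the problem):

```python
def matvec(M, v):
    """Multiply a 2x2 matrix (nested tuple) by a 2-vector (tuple)."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def in_null_space(M, v):
    """True if v is in N(M), i.e. M v = 0."""
    return matvec(M, v) == (0, 0)

# Illustrative example: B has null space spanned by (1, 0).
B = ((0, 0), (0, 1))
A = ((3, 0), (0, 5))   # diagonal, so it maps span{(1, 0)} into itself

x = (7, 0)                              # x is in N(B)
assert in_null_space(B, x)
assert in_null_space(B, matvec(A, x))   # A x stays in N(B)
```

Checking ##A##-invariance of ##N(B)## amounts to checking that ##B(A\vec{x}) = \vec{0}## for every ##\vec{x}## with ##B\vec{x} = \vec{0}##.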

## The Attempt at a Solution

Alright, I have no idea what I'm doing and I came close to failing (or failed, the exam results haven't come back yet) my matrix algebra course, but here goes:

Let us assume that ##N(B)## is indeed ##A##-invariant, and that ##AB = \alpha BA##. Then according to ##(2)##:
$$A N(B) \subset N(B)$$

Also, solving for ##\alpha## in the given equation gives us (assuming ##A## and ##B## are non-singular):
\begin{align*}
AB &= \alpha BA\\
\iff ABA^{-1} &= \alpha B\\
\iff ABA^{-1}B^{-1} &= \alpha I.
\end{align*}

Looking at the middle equation, ##\alpha B## seems similar to ##B##. I wonder if this is something I can use to my advantage. Regardless, I'm not sure how to progress from here on out. Like I said, my understanding of the theory is on a very shaky foundation, so I really could use some help.

TheSodesa said:
Like I said, my understanding of the theory is on a very shaky foundation, so I really could use some help.

What about starting more simply:

Let ##x \in N(B)## and consider ##Ax##. Is this in ##N(B)## or not?

PeroK said:
What about starting more simply:

Let ##x \in N(B)## and consider ##Ax##. Is this in ##N(B)## or not?

Well, since ##N(B)## is ##A##-invariant, yes it is, since ##AN(B)## is a subset of ##N(B)##. All of the vectors in ##AN(B)## can be found in ##N(B)##.

Hmm. Then, since the null space is a subspace, any linear combination of vectors in ##N(B)## is also in ##N(B)##, so ##\alpha \vec{x}## is also in ##N(B)##. ##\alpha A \vec{x}## should also be in ##N(B)##, because of ##N(B)##'s ##A##-invariance. Am I going in the right direction?

I'm not sure how I can make the connection between these ideas and the equation I'm supposed to solve. I can't plug an ##\vec{x}## or an ##\alpha \vec{x}## in there anywhere, can I?

TheSodesa said:
Well, since ##N(B)## is ##A##-invariant, yes it is, since ##AN(B)## is a subset of ##N(B)##. All of the vectors in ##AN(B)## can be found in ##N(B)##.

That's assuming what you are trying to show. Why not start with what you were given:

##AB = \alpha BA##

Then, as I suggested above:

PeroK said:
Let ##x \in N(B)## and consider ##Ax##. Is this in ##N(B)## or not?

PeroK said:
That's assuming what you are trying to show. Why not start with what you were given:

##AB = \alpha BA##

Then, as I suggested above:

So finding ##\alpha## is equivalent to showing that all of the vectors ##A\vec{x} \in N(B)##?

If I multiply the given equation with ##\vec{x} \in N(B)## from the right, I get:
\begin{align*}
AB\vec{x} &= \alpha BA \vec{x}\\
\vec{0} &= \alpha BA \vec{x}
\end{align*}
Now assuming ##B##, ##A## and ##\vec{x}## are all non-zero matrices/vectors, the only way the right side can be equal to zero is if ##\alpha = 0## or ##A\vec{x} = \vec{0}## (or both, if you want to get pedantic about this). If ##A\vec{x} = \vec{0}##, that would imply that ##A\vec{x}## is indeed in ##N(B)##, and ##N(B)## is ##A##-invariant?

TheSodesa said:
So finding ##\alpha## is equivalent to showing that all of the vectors ##A\vec{x} \in N(B)##?

If I multiply the equation with ##\vec{x} \in N(B)## from the right, I get:
\begin{align*}
AB\vec{x} &= \alpha BA \vec{x}\\
\vec{0} &= \alpha BA \vec{x}
\end{align*}
Now assuming ##B##, ##A## and ##\vec{x}## are all non-zero matrices/vectors, the only way the right side can be equal to zero is if ##\alpha = 0## or ##A\vec{x} = \vec{0}## (or both, if you want to get pedantic about this). If ##A\vec{x} = \vec{0}##, that would imply that ##A\vec{x}## is indeed in ##N(B)##, and ##N(B)## is ##A##-invariant?

You've got the basis of the argument, but you need to work on your logic. Also, there is no reason to assume that "##B##, ##A## and ##\vec{x}## are all non-zero matrices/vectors". In fact, you cannot make any of those assumptions at all.

To try to tighten up your logic, let's go back to:

##0 = A(Bx) = (AB)x = (\alpha BA)x = \alpha B(Ax)##

What can you deduce from that?
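This chain of equalities can be sanity-checked numerically. The pair below is an illustrative choice satisfying ##AB = \alpha BA## with ##\alpha = 1/2## (not matrices from the problem):

```python
def matmul(M, N):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return tuple(sum(M[i][k] * v[k] for k in range(2)) for i in range(2))

# Illustrative pair with AB = alpha * BA, alpha = 0.5.
A = ((1, 0), (0, 2))
B = ((0, 1), (0, 0))
alpha = 0.5

AB, BA = matmul(A, B), matmul(B, A)
assert AB == tuple(tuple(alpha * e for e in row) for row in BA)

x = (1, 0)                           # x is in N(B): B x = 0
assert matvec(B, x) == (0, 0)
# The whole chain collapses to 0 = alpha * B(Ax):
assert tuple(alpha * e for e in matvec(B, matvec(A, x))) == (0, 0)
```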

PeroK said:
You've got the basis of the argument, but you need to work on your logic. Also, there is no reason to assume that "##B##, ##A## and ##\vec{x}## are all non-zero matrices/vectors". In fact, you cannot make any of those assumptions at all.

To try to tighten up your logic, let's go back to:

##0 = A(Bx) = (AB)x = (\alpha BA)x = \alpha B(Ax)##

What can you deduce from that?

I'm a bit hesitant to say this, but looking at $$A(B\vec{x}) = \alpha B(A\vec{x})$$ almost makes it look like $$A = \alpha B$$

TheSodesa said:
I'm a bit hesitant to say this, but looking at $$A(B\vec{x}) = \alpha B(A\vec{x})$$ almost makes it look like $$A = \alpha B$$

First, assuming two matrices satisfy ##AB = \alpha BA## is not going to lead to ##A = \alpha B## or anything like that. Also, that's not the sort of thing we are trying to show!

So, let's go back to:

##0 = \alpha B(Ax)##

Let me do the next step:

Either ##\alpha = 0##

or ##B(Ax) = 0##

What next?

PeroK said:
First, assuming two matrices satisfy ##AB = \alpha BA## is not going to lead to ##A = \alpha B## or anything like that. Also, that's not the sort of thing we are trying to show!

So, let's go back to:

##0 = \alpha B(Ax)##

Let me do the next step:

Either ##\alpha = 0##

or ##B(Ax) = 0##

What next?

I assume we are not allowed to assume non-singularity for either ##A## or ##B##, so I can't just say that $$B^{-1} B (A\vec{x}) = A\vec{x} = \vec{0},$$ which would imply that either ##\alpha = 0## or ##A\vec{x} \in N(B)##?

If we are allowed to assume non-singularity, then ##A \alpha \vec{x}## is also in ##N(B)##, and ##\alpha## can be any real number.

TheSodesa said:
I assume we are not allowed to assume non-singularity for either ##A## or ##B##, so I can't just say that $$B^{-1} B (A\vec{x}) = A\vec{x} = \vec{0},$$ which would imply that either ##\alpha = 0## or ##A\vec{x} \in N(B)##.

Nothing like that. We already know that ##\alpha = 0## is one option. For the second case, what if we let ##y = Ax##, then we have:

##By = 0##

What does that tell you about ##y##?

PeroK said:
Nothing like that. We already know that ##\alpha = 0## is one option. For the second case, what if we let ##y = Ax##, then we have:

##By = 0##

What does that tell you about ##y##?

That ##\vec{y} = A \vec{x}## is also in ##N(B)##, which implies that ##AN(B) \subset N(B)##. Now since ##N(B)## is a subspace, I could take any non-zero real number ##\alpha##, so that ##\alpha A \vec{x} \in N(B)##, right?

TheSodesa said:
That ##\vec{y} = A \vec{x}## is also in ##N(B)##, which implies that ##AN(B) \subset N(B)##.

You were right up to here. Then you went off the point!

Try to summarise what we've done so far. It's spread over a number of posts, so try to pull it all together into a logical argument that starts with:

Let ##AB = \alpha BA##

And ends with:

##\alpha = 0## or ##N(B)## is ##A##-invariant.

PeroK said:
You were right up to here. Then you went off the point!

Try to summarise what we've done so far. It's spread over a number of posts, so try to pull it all together into a logical argument that starts with:

Let ##AB = \alpha BA##

And ends with:

##\alpha = 0## or ##N(B)## is ##A##-invariant.

Ahh, wait, I think I see it now. Since we are not allowed to make any other assumptions other than the given ##AB = \alpha BA##, the only thing we can say for certain is that if we want ##N(B)## to be ##A##-invariant, ##\alpha## must not be equal to zero, but it can be any other real number.

Is this correct?

TheSodesa said:
Ahh, wait, I think I see it now. Since we are not allowed to make any other assumptions other than the given ##AB = \alpha BA##, the only thing we can say for certain is that if we want ##N(B)## to be ##A##-invariant, ##\alpha## must not be equal to zero, but it can be any other real number.

Is this correct?

Pretty much, but again your logic is loose. In purely mathematical language, I'd say we've proved:

##AB = \alpha BA \ \Rightarrow \ \alpha = 0 \ ## or ## \ N(B)## is ##A##-invariant.

What you can do with this is say:

## \alpha \ne 0## and ##AB = \alpha BA \ \Rightarrow \ N(B)## is ##A##-invariant. (1)

What we haven't done is said anything further about the case ##\alpha = 0##. In other words:

##AB = 0 \ \Rightarrow \ ?##

We haven't shown anything one way or the other in this case about the ##A##-invariance of ##N(B)##

In any case, I think you should try to do a clear, logical proof of statement (1), based on this thread.

PeroK said:
Pretty much, but again your logic is loose. In purely mathematical language, I'd say we've proved:

##AB = \alpha BA \ \Rightarrow \ \alpha = 0 \ ## or ## \ N(B)## is ##A##-invariant.

What you can do with this is say:

## \alpha \ne 0## and ##AB = \alpha BA \ \Rightarrow \ N(B)## is ##A##-invariant. (1)

What we haven't done is said anything further about the case ##\alpha = 0##. In other words:

##AB = 0 \ \Rightarrow \ ?##

We haven't shown anything one way or the other in this case about the ##A##-invariance of ##N(B)##

In any case, I think you should try to do a clear, logical proof of statement (1), based on this thread.

Alright.

Let ##AB = \alpha BA## and ##\vec{x} \in N(B)##. Now
\begin{align*}
AB &= \alpha BA &\Rightarrow\\
A \overbrace{B \vec{x}}^{ \vec{0} \text{ by definition} } &= \alpha BA \vec{x} &\iff\\
\vec{0} &= \alpha BA \vec{x}\\
\end{align*}

This is true if ##\alpha = 0## (inclusive) or ##BA\vec{x} = \vec{0}##. Now to make things clearer, let us denote ##A\vec{x} = \vec{y}##. If we now assume, that ##\alpha \neq 0##, ##BA \vec{x} = B \vec{y} = 0 \iff \vec{y} \in N(B)##.

Therefore ##\alpha \neq 0 \Rightarrow AN(B) \subset N(B)##.

Next I should probably try to show that ##\alpha = 0 \Rightarrow AN(B) \not\subset N(B)##.

TheSodesa said:
Alright.

Let ##AB = \alpha BA## and ##\vec{x} \in N(B)##. Now
\begin{align*}
AB &= \alpha BA &\Rightarrow\\
A \overbrace{B \vec{x}}^{ \vec{0} \text{ by definition} } &= \alpha BA \vec{x} &\iff\\
\vec{0} &= \alpha BA \vec{x}\\
\end{align*}

This is true if ##\alpha = 0## (inclusive) or ##BA\vec{x} = \vec{0}##. Now to make things clearer, let us denote ##A\vec{x} = \vec{y}##. If we now assume, that ##\alpha \neq 0##, ##BA \vec{x} = B \vec{y} = 0 \iff \vec{y} \in N(B)##.

Therefore ##\alpha \neq 0 \Rightarrow AN(B) \subset N(B)##.

Next I should probably try to show that ##\alpha = 0 \Rightarrow AN(B) \not\subset N(B)##.

Yes, I think that's good.

On the last point, perhaps you might have both cases. Can you find ##A_1, B_1## where

##A_1B_1 = 0## and ##N(B_1)## is ##A_1##-invariant.

And, find ##A_2, B_2## where:

##A_2B_2 = 0## and ##N(B_2)## is not ##A_2##-invariant?

Hint: try some simple 2x2 matrices.

PeroK said:
Yes, I think that's good.

On the last point, perhaps you might have both cases. Can you find ##A_1, B_1## where

##A_1B_1 = 0## and ##N(B_1)## is ##A_1##-invariant.

And, find ##A_2, B_2## where:

##A_2B_2 = 0## and ##N(B_2)## is not ##A_2##-invariant?

Hint: try some simple 2x2 matrices.

Pretty much every single simple 2x2 matrix I tried in Matlab row-reduces to the identity matrix ##I_2##, as does their product. Also, every randomly generated 2x2 matrix seems to row-reduce to the identity matrix.

The null space of the identity matrix is just the zero vector: ##N(I_n) = span\{\vec{0}\}##. This is supposed to be a huge red flag, isn't it?

TheSodesa said:
Pretty much every single simple 2x2 matrix I tried in Matlab row-reduces to the identity matrix ##I_2##, as does their product. Also, every randomly generated 2x2 matrix seems to row-reduce to the identity matrix.

The null space of the identity matrix is just the zero vector: ##N(I_n) = span\{\vec{0}\}##. This is supposed to be a huge red flag, isn't it?

The identity matrix isn't much good, but perhaps the ##0## matrix is something to try?

Also, can you find two non-zero matrices whose product is the ##0## matrix?

PeroK said:
The identity matrix isn't much good, but perhaps the ##0## matrix is something to try?

Also, can you find two non-zero matrices whose product is the ##0## matrix?

What I know is that if I want the product ##AB=0##, all of the column vectors of ##B## have to be in ##N(A)##.
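That observation is easy to confirm with a toy pair, chosen (hypothetically) so that both columns of ##B## lie in ##N(A)##:

```python
def matmul(M, N):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return tuple(sum(M[i][k] * v[k] for k in range(2)) for i in range(2))

A = ((1, 1), (1, 1))      # N(A) = span{(1, -1)}
B = ((1, -1), (-1, 1))    # both columns of B lie in N(A)

cols = [(B[0][j], B[1][j]) for j in range(2)]
assert all(matvec(A, c) == (0, 0) for c in cols)   # columns in N(A)
assert matmul(A, B) == ((0, 0), (0, 0))            # hence AB = 0
```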

TheSodesa said:
What I know is that if I want the product ##AB=0##, all of the column vectors of ##B## have to be in ##N(A)##.

Do you understand the concept of an example or counterexample?

If I said: all matrices commute, how would you disprove that?

PeroK said:
Do you understand the concept of an example or counterexample?

If I said: all matrices commute, how would you disprove that?

Sorry for being a pain in the butt.

By finding a specific example of two non-commuting matrices.
So what I should do is come up with a specific example of ##AB = 0## and ##A N(B) \not\subset N(B)##. Assuming the claim was that for all ##\alpha \in \mathbb{R}##, ##A N(B) \subset N(B)##.

TheSodesa said:
Sorry for being a pain in the butt.

By finding a specific example of two non-commuting matrices.
So what I should do is come up with a specific example of ##AB = 0## and ##A N(B) \not\subset N(B)##.

Yes, absolutely. And the opposite: two different matrices ##A_2, B_2## with ##A_2B_2 = 0## and ##A_2 N(B_2) \subset N(B_2)##.

See post #16.

That will show that the property ##AB = 0## does not determine whether ##N(B)## is ##A##-invariant or not. So, there is nothing more you can prove.

PeroK said:
Yes, absolutely. And the opposite: two different matrices ##A_2, B_2## with ##A_2B_2 = 0## and ##A_2 N(B_2) \subset N(B_2)##.

See post #16.

That will show that the property ##AB = 0## does not determine whether ##N(B)## is ##A##-invariant or not. So, there is nothing more you can prove.

Alright, so I found a counterexample. If (notice the non-singularity of ##A_1##)
$$A_1 = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} \Rightarrow N(A_1) = span\{\vec{0}\} \Rightarrow B_1= \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix} \Rightarrow N(B_1) = \mathbb{C}^2$$

If we now choose ##\vec{x_1} \in N(B_1)##, e.g. ##\vec{x_1} = [1, 1]^{T}##,

$$A_1 \vec{x_1} = [3,7]^T,$$
which is in ##N(B_1)##.

On the other hand, if we choose a singular matrix
$$A_2 = \begin{bmatrix} 2 & 0\\ 0 & 0 \end{bmatrix} \Rightarrow N(A_2) = span\{[0,1]^T\} \Rightarrow B_2= \begin{bmatrix} 0 & 0\\ 2 & 4 \end{bmatrix} \Rightarrow N(B_2) = span\{ [-2,1]^T \}$$

Then if we take a vector from ##N(B_2)##, e.g. ##x_2 = [-2,1]^T##,
$$A_2 \vec{x_2} = [-4,0]^T,$$
which is not in ##N(B_2)##. We have therefore shown a counterexample to the claim:
$$\alpha = 0 \Rightarrow AN(B) \subset N(B)$$
for all ##A## and ##B##.

Therefore, in order for the claim to hold for certain, ##\alpha## must not be equal to zero.
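For what it's worth, the arithmetic in this counterexample checks out; here it is verified mechanically with the same ##A_2## and ##B_2## as above:

```python
def matmul(M, N):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return tuple(sum(M[i][k] * v[k] for k in range(2)) for i in range(2))

A2 = ((2, 0), (0, 0))
B2 = ((0, 0), (2, 4))

assert matmul(A2, B2) == ((0, 0), (0, 0))   # this is the alpha = 0 case

x2 = (-2, 1)                          # spans N(B2)
assert matvec(B2, x2) == (0, 0)       # x2 is in N(B2)
Ax2 = matvec(A2, x2)
assert Ax2 == (-4, 0)
assert matvec(B2, Ax2) != (0, 0)      # A2 x2 is NOT in N(B2)
```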

TheSodesa said:
Alright, so I found a counterexample. If (notice the non-singularity of ##A_1##)
$$A_1 = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} \Rightarrow N(A_1) = span\{\vec{0}\} \Rightarrow B_1= \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix} \Rightarrow N(B_1) = \mathbb{C}^2$$

If we now choose ##\vec{x_1} \in N(B_1)##, e.g. ##\vec{x_1} = [1, 1]^{T}##,

$$A_1 \vec{x_1} = [3,7]^T,$$
which is in ##N(B_1)##

Okay, but you can't just choose one vector. To show ##N(B)## is ##A##-invariant, you have to show that ##x \in N(B) \ \Rightarrow \ Ax \in N(B)##

A simpler approach is:

Let ##B## be the zero matrix and ##A## any matrix. Then ##N(B)## is the entire space, hence ##A##-invariant for any ##A##.

That's an example of ##A##-invariance.
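This simpler example can also be checked mechanically: with ##B = 0## the condition ##B(A\vec{x}) = \vec{0}## holds for every ##\vec{x}##, so ##N(B)## is trivially ##A##-invariant. A sketch (the particular ##A## is an arbitrary choice):

```python
def matvec(M, v):
    """Multiply a 2x2 matrix (nested tuple) by a 2-vector."""
    return tuple(sum(M[i][k] * v[k] for k in range(2)) for i in range(2))

Z = ((0, 0), (0, 0))    # B = 0, so N(B) is all of R^2
A = ((1, 2), (3, 4))    # any A works; this one is arbitrary

for x in [(1, 0), (0, 1), (5, -7)]:
    assert matvec(Z, matvec(A, x)) == (0, 0)   # A x is still in N(B)
```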

TheSodesa said:
On the other hand, if we choose a singular matrix
$$A_2 = \begin{bmatrix} 2 & 0\\ 0 & 0 \end{bmatrix} \Rightarrow N(A_2) = span\{[0,1]^T\} \Rightarrow B_2= \begin{bmatrix} 0 & 0\\ 2 & 4 \end{bmatrix} \Rightarrow N(B_2) = span\{ [-2,1]^T \}$$

Then if we take a vector from ##N(B_2)##, e.g. ##x_2 = [-2,1]^T##,
$$A_2 \vec{x_2} = [-4,0]^T,$$
which is not in ##N(B_2)##. We have therefore shown a counterexample to the claim:
$$\alpha = 0 \Rightarrow AN(B) \subset N(B)$$
for all ##A## and ##B##.

Therefore, in order for the claim to hold for certain, ##\alpha## must not be equal to zero.

Yes, in this case you just have to find a single vector in ##N(B)## that ##A## maps outside of ##N(B)##. That's the example I had (although I just used):

##A_2=
\begin{bmatrix}
1 & 0\\
0 & 0
\end{bmatrix}##

##B_2=
\begin{bmatrix}
0 & 0\\
1 & 1
\end{bmatrix}##

PeroK said:
Okay, but you can't just choose one vector. To show ##N(B)## is ##A##-invariant, you have to show that ##x \in N(B) \ \Rightarrow \ Ax \in N(B)##

A simpler approach is:

Let ##B## be the zero matrix and ##A## any matrix. Then ##N(B)## is the entire space, hence ##A##-invariant for any ##A##.

That's an example of ##A##-invariance.

Well, this just goes to show how little I've understood of the theory of vectors and matrices, specifically what (sub)spaces are and how they're worked with (and proofs in general). Hey, thanks for your patience. This must have been slightly frustrating.

I will mark this as solved.

TheSodesa said:
Well, this just goes to show how little I've understood of the theory of vectors and matrices, specifically what (sub)spaces are and how they're worked with (and proofs in general). Hey, thanks for your patience. This must have been slightly frustrating.

I will mark this as solved.

I think it's formal proofs and logic in general that you need to keep working on. Linear Algebra is often the first time you have to do these in earnest and it's a common problem.

PeroK said:
I think it's formal proofs and logic in general that you need to keep working on. Linear Algebra is often the first time you have to do these in earnest and it's a common problem.

I have Velleman's How To Prove It in my bookshelf. I'm just having trouble with finding the time to read it and work on the problems. I took slightly too many courses this fall, and the fact that our school's matrix algebra course was restructured (from a 1 semester course to a half semester course while still requiring the same amount of work) because of budget cuts did nothing to help with trying to keep up with the course material, which itself was a truly god-awful handout. There was no book, not even a recommendation from the teacher (I even asked them).

I'm still ever so slightly annoyed by this.
