Is Im(A) Equal to Im(AV) for an Invertible Matrix V?


Homework Help Overview

The problem is to show that the image of an m×n matrix A equals the image of AV for every invertible n×n matrix V, i.e. im(A) = im(AV).

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss the meaning of the image of a matrix and how it relates to the span of its columns. There are inquiries about the implications of a vector being in im(A) versus im(AV) and the role of linear transformations in establishing the relationship between these images.

Discussion Status

The discussion has progressed with participants exploring the implications of linear transformations and the associative property. Some participants have articulated reasoning for inclusions in the image sets, while others have confirmed understanding of the concepts discussed.

Contextual Notes

There is an emphasis on understanding the definitions and properties of matrix images, as well as the implications of using an invertible matrix in the context of linear transformations.

Abtinnn

Homework Statement


If ##A## is an ##m \times n## matrix, show that for each invertible ##n \times n## matrix ##V##, ##im(A) = im(AV)##.

Homework Equations


none

The Attempt at a Solution


I know that im(A) can also be written as the span of columns of A.
I also know that ##AV = [Av_1 \; Av_2 \; \cdots \; Av_n]##,
so im(AV) is the span of the columns of that matrix. However, I don't understand how the two can be equal.
 
Forget the spanning vectors for a moment. What does it mean for a vector ##x## to be in ##im(A)##, and in ##im(AV)##, respectively?
 
fresh_42 said:
Forget the spanning vectors for a moment. What does it mean for a vector ##x## to be in ##im(A)##, and in ##im(AV)##, respectively?
If ##A## is ##m \times n## and ##y \in im(A)##, then ##y## can be written as ##Ax##, where ##x \in \mathbb{R}^n##.
If ##y \in im(AV)##, then ##y## can be written as ##(AV)x##, where ##x \in \mathbb{R}^n##.
 
Right. Now all you need is the associative law for linear maps for one inclusion, and to insert ##V \cdot V^{-1} = 1## in between for the other: ##im(AV) \subseteq im(A)## and ##im(AV) \supseteq im(A)##.

Actually, you've already proved one inclusion by explaining it to me.
 
fresh_42 said:
Right. Now all you need is the associative law for linear maps for one inclusion, and to insert ##V \cdot V^{-1} = 1## in between for the other: ##im(AV) \subseteq im(A)## and ##im(AV) \supseteq im(A)##.

Actually, you've already proved one inclusion by explaining it to me.

I believe I understand it! Could you please check if I've got it right?

Assume ##y \in im(A)##.
Then ##y = Ax = (AVV^{-1})x = AV(V^{-1}x)##.
Since ##V^{-1}x \in \mathbb{R}^n##, we get ##y \in im(AV)##, so ##im(A) \subseteq im(AV)##.

Assume ##y \in im(AV)##.
Then ##y = (AV)x = A(Vx)##.
Since ##Vx \in \mathbb{R}^n##, we get ##y \in im(A)##, so ##im(AV) \subseteq im(A)##.

Therefore im(A) = im(AV).
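As a numerical sanity check (not a substitute for the proof above), one can verify the equality on a concrete example by comparing ranks: two column spaces with ##im(AV) \subseteq im(A)## coincide exactly when ##rank(A) = rank(AV) = rank([A \mid AV])##. A minimal sketch in plain Python, where `rank` and `matmul` are small illustrative helpers written for this example:

```python
def rank(M, tol=1e-9):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    M = [row[:] for row in M]          # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0                              # index of the next pivot row
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue                   # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]                  # a 2x3 matrix
V = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [0.0, 0.0, 1.0]]                  # upper triangular with 1s on the diagonal, so invertible

AV = matmul(A, V)
aug = [A[i] + AV[i] for i in range(len(A))]   # block matrix [A | AV]

# Equal column spaces <=> rank(A) = rank(AV) = rank([A | AV])
assert rank(A) == rank(AV) == rank(aug)
```

One example of course proves nothing, but rerunning this with random invertible ##V## is a quick way to convince yourself before writing the inclusion argument.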
 
yep
 
Thanks a lot! I really appreciate it :)
 
You're welcome.
 
