Is Im(A) Equal to Im(AV) for an Invertible Matrix V?

SUMMARY

The discussion confirms that for an m×n matrix A and an invertible n×n matrix V, the image of A, denoted im(A), is equal to the image of the product AV, denoted im(AV). The proof involves demonstrating that any vector y in im(A) can be expressed as y = Ax, which can also be written as y = AV(V⁻¹x) for some x in ℝⁿ. Conversely, any vector in im(AV) can be shown to belong to im(A) using the fact that Vx also lies in ℝⁿ. Thus the conclusion im(A) = im(AV) is established.

PREREQUISITES
  • Understanding of linear transformations and their properties
  • Familiarity with the concept of the image of a matrix
  • Knowledge of invertible matrices and their implications
  • Basic proficiency in matrix multiplication and associative law
NEXT STEPS
  • Study the properties of linear transformations in depth
  • Learn about the implications of matrix invertibility in linear algebra
  • Explore the concept of the span of vectors and its applications
  • Investigate the relationship between matrix rank and image dimensions
USEFUL FOR

Students of linear algebra, mathematicians, and anyone involved in understanding matrix theory and linear transformations.

Abtinnn

Homework Statement


If A is an m×n matrix, show that for each invertible n×n matrix V, im(A) = im(AV).

Homework Equations


none

The Attempt at a Solution


I know that im(A) can also be written as the span of the columns of A.
I also know that ##AV = [Av_1 \; Av_2 \; \cdots \; Av_n]##, where ##v_1, \dots, v_n## are the columns of V,
so im(AV) is the span of the columns of that matrix. However, I don't understand how the two can be equal.
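As a quick numerical sanity check (not part of the proof itself), the column identity ##AV = [Av_1 \; \cdots \; Av_n]## above can be verified with NumPy on randomly generated matrices; the shapes 4×3 and 3×3 here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))  # a sample m×n matrix (m=4, n=3)
V = rng.standard_normal((3, 3))  # a sample n×n matrix

AV = A @ V
# the j-th column of AV is A applied to the j-th column of V
for j in range(V.shape[1]):
    assert np.allclose(AV[:, j], A @ V[:, j])
print("column identity holds")
```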
 
Forget the spanning vectors for a moment. What does it mean for a vector ##y## to be in ##\operatorname{im}(A)##, resp. in ##\operatorname{im}(AV)##?
 
fresh_42 said:
Forget the spanning vectors for a moment. What does it mean for a vector ##y## to be in ##\operatorname{im}(A)##, resp. in ##\operatorname{im}(AV)##?
If A is m×n and ##y \in \operatorname{im}(A)##, then y can be written as ##y = Ax## for some ##x \in \mathbb{R}^n##.
If ##y \in \operatorname{im}(AV)##, then y can be written as ##y = (AV)x## for some ##x \in \mathbb{R}^n##.
 
Right. Now all you need is the associative law of matrix multiplication for one inclusion, and to insert ##V \cdot V^{-1} = I## somewhere in between for the other: ##\operatorname{im}(A \cdot V) \subseteq \operatorname{im}(A)## and ##\operatorname{im}(A \cdot V) \supseteq \operatorname{im}(A)##.

Actually, you've already proved one inclusion by explaining it to me.
 
fresh_42 said:
Right. Now all you need is the associative law of matrix multiplication for one inclusion, and to insert ##V \cdot V^{-1} = I## somewhere in between for the other: ##\operatorname{im}(A \cdot V) \subseteq \operatorname{im}(A)## and ##\operatorname{im}(A \cdot V) \supseteq \operatorname{im}(A)##.

Actually, you've already proved one inclusion by explaining it to me.

I believe I understand it! Could you please check if I've got it right?

Assume ##y \in \operatorname{im}(A)##.
Then ##y = Ax = (AVV^{-1})x = AV(V^{-1}x)## for some ##x \in \mathbb{R}^n##.
Since ##V^{-1}x \in \mathbb{R}^n##, it follows that ##y \in \operatorname{im}(AV)##, so ##\operatorname{im}(A) \subseteq \operatorname{im}(AV)##.

Assume ##y \in \operatorname{im}(AV)##.
Then ##y = (AV)x = A(Vx)## for some ##x \in \mathbb{R}^n##.
Since ##Vx \in \mathbb{R}^n##, it follows that ##y \in \operatorname{im}(A)##, so ##\operatorname{im}(AV) \subseteq \operatorname{im}(A)##.

Therefore ##\operatorname{im}(A) = \operatorname{im}(AV)##.
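For anyone who wants a numerical sanity check of the conclusion, equality of column spaces can be tested via ranks: im(A) = im(AV) holds exactly when rank(A), rank(AV), and rank([A | AV]) all agree. A small NumPy sketch with randomly generated matrices (the sizes 4×3 and 3×3 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # a sample m×n matrix
V = rng.standard_normal((3, 3))  # a sample n×n matrix
# a generic random square matrix is invertible; verify via its determinant
assert abs(np.linalg.det(V)) > 1e-9

AV = A @ V
r_A = np.linalg.matrix_rank(A)
r_AV = np.linalg.matrix_rank(AV)
# equal column spaces  <=>  each rank equals the rank of [A | AV]
r_both = np.linalg.matrix_rank(np.hstack([A, AV]))
print(r_A == r_AV == r_both)
```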
 
yep
 
Thanks a lot! I really appreciate it :)
 
You're welcome.
 
