Some questions about reducible matrices and operators

  • Context: Graduate
  • Thread starter: Sumanta
  • Tags: Matrices, Operators
SUMMARY

This discussion centers on the equality of nullity between the matrices A and PAP⁻¹ in the context of linear transformations between vector spaces. The participants explore the implications of isomorphic vector spaces R1 and R2, and the conditions under which a linear transformation can be defined across different dimensions. Key points include the necessity of invertibility for the transformation matrix P and the relationship between the dimensions of the vector spaces V and W. The discussion references standard theorems from linear algebra, particularly those found in Hoffman and Kunze.

PREREQUISITES
  • Understanding of linear transformations and isomorphisms in vector spaces
  • Familiarity with nullity and dimension concepts in linear algebra
  • Knowledge of invertible matrices and their properties
  • Experience with transformation matrices and basis changes
NEXT STEPS
  • Study the properties of isomorphic vector spaces and their implications
  • Learn about the Rank-Nullity Theorem and its applications
  • Explore the concept of basis transformations in linear algebra
  • Review theorems related to linear operators from Hoffman and Kunze
USEFUL FOR

Students and professionals in mathematics, particularly those specializing in linear algebra, as well as educators seeking to clarify concepts related to vector spaces and linear transformations.

Sumanta
Hello,

This is regarding the equality of nullity between A and PAP⁻¹.

If my understanding is correct, then the setup should be according to the diagram below:

V ----------------> R1 (isomorphic to V)
|                   |
|                   |
v                   v
W ----------------> R2 (isomorphic to W)

So A : V -> W
P : V -> R1
P : W -> R2

I am not convinced by this second map, since the size of W is less than that of V: the nullity in this case is assumed to be greater than 1.

If my understanding is correct, what one does is this: since R1 is isomorphic to V,

R1 -> R2 is the composition R1 -> V -> W -> R2. That is, take bases in R1 and apply P⁻¹, which is well defined since the inverse exists for an isomorphism. Then apply A, landing all the vectors of V that you obtained from R1 in W. Then apply P to all those vectors and claim that none of them go to 0 in R2 other than the 0s of W. What is confusing is this: how do you define the same linear transformation on two vector spaces of two different dimensions in general, and, if possible, how do you prove that only the 0s in W go to 0s in R2 and nothing else?
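The basis-independence of nullity described above can be sketched numerically. This is a minimal NumPy sketch with made-up matrices: for a rectangular A : V -> W, changing bases in V and in W replaces A by Q A P⁻¹ with two (generally different) invertible matrices P and Q, and the nullity is unchanged.

```python
import numpy as np

# Hypothetical example: A maps a 4-dimensional V to a 3-dimensional W,
# so its matrix is 3x4.  Changing bases in V and W replaces A by Q A P^{-1},
# with invertible P (4x4, basis change in V) and Q (3x3, basis change in W).
rng = np.random.default_rng(0)
A = np.array([[1., 2., 3., 4.],
              [0., 1., 0., 1.],
              [1., 3., 3., 5.]])   # third row = first + second, so rank 2

P = rng.normal(size=(4, 4))        # a random matrix is invertible with probability 1
Q = rng.normal(size=(3, 3))

B = Q @ A @ np.linalg.inv(P)       # the matrix of the same map in the new bases

def nullity(M):
    # rank-nullity: nullity = (number of columns) - rank
    return M.shape[1] - np.linalg.matrix_rank(M)

print(nullity(A), nullity(B))      # both 2
```

Multiplying by invertible matrices on either side cannot change the rank, so by rank-nullity the nullity is preserved as well.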
 
How can P be both a map from V to R_1, and from W to R_2? It can't, and it isn't.

If P is invertible, then it is square as well. So even if we assume A is a map from V to V, that is, V is isomorphic to W, then it is not correct to assert "W has smaller size [dimension?] than V".

By equality of nullity, do you mean the dimension of the kernel/null space? That is simple: show that the two null spaces are isomorphic vector spaces (P is invertible).
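The suggestion above can be illustrated with a minimal NumPy sketch (the matrices are made up for illustration): when A is square and P is invertible, the map x -> Px carries the null space of A into the null space of PAP⁻¹, and P⁻¹ carries it back, so the two null spaces are isomorphic and have the same dimension.

```python
import numpy as np

# Sketch: A is an operator V -> V and P is invertible.  Then
# (P A P^{-1})(P x) = P (A x), so x is in ker(A) exactly when
# P x is in ker(P A P^{-1}); P restricts to an isomorphism of null spaces.
A = np.array([[1., 2.],
              [2., 4.]])            # rank 1, nullity 1; kernel spanned by (2, -1)
P = np.array([[1., 1.],
              [0., 1.]])            # invertible: det = 1
B = P @ A @ np.linalg.inv(P)

x = np.array([2., -1.])             # a kernel vector of A
assert np.allclose(A @ x, 0)
assert np.allclose(B @ (P @ x), 0)  # P x lies in the kernel of P A P^{-1}
```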
 
Actually, after I wrote down the query I happened to refer again to Hoffman and Kunze and found that this is a standard theorem regarding the transformation of a linear operator from one basis to another.

Then I realized that the point which was not clear was this: if Tv is a vector in basis B, then how could I write P(Tv) with respect to B', where P is the matrix of transformation from B to B'? What is unclear is that when you do this you are actually premultiplying a vector which is already in the space W. But according to Theorem 8 on pg 53 of the book:

Suppose P is an n x n invertible matrix over F. Let V be an n-dimensional vector space over F and let B be an ordered basis of V. Then there exists a unique ordered basis B' of V such that [alpha]_B = P[alpha]_{B'}.

So how is it that here the vector space in which the vector is going to reside and the basis are completely different? Am I missing something very obvious? My thinking is that probably, even if the space W has smaller dimension than V, it is extended by adding 0s to equate it with V, and then the above technique is applied. Still, I am highly confused about applying an N x N matrix which would transform V -> V to something in W.
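The coordinate identity in the quoted theorem can be checked with a small NumPy sketch (the numbers are made up). Taking B to be the standard basis of R², the columns of P are the B-coordinates of the basis B', and then [alpha]_B = P[alpha]_{B'} for every vector alpha.

```python
import numpy as np

# Sketch of the quoted theorem in R^2, with B the standard basis.
# Given invertible P, take B' to be the basis whose vectors have the
# columns of P as their B-coordinates; then [alpha]_B = P [alpha]_{B'}.
P = np.array([[1., 2.],
              [1., 3.]])            # invertible: det = 1
b1_prime, b2_prime = P[:, 0], P[:, 1]   # the basis B' (in B-coordinates)

alpha = np.array([5., 7.])          # an arbitrary vector, written in basis B
coords_Bprime = np.linalg.solve(P, alpha)   # [alpha]_{B'}

# reconstructing alpha from its B'-coordinates recovers P @ [alpha]_{B'}
assert np.allclose(coords_Bprime[0] * b1_prime + coords_Bprime[1] * b2_prime, alpha)
assert np.allclose(P @ coords_Bprime, alpha)
```

Note that both bases here live in the same space; the theorem never moves a vector between spaces of different dimensions, which is why P must be square.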

Sorry, but I do not know how to use subscripts here for clarity; hopefully I have been able to get my doubt across.
 
