Why is the matrix representation of a linear map unique?

  • Context: Graduate
  • Thread starter: Bipolarity
  • Tags: Linear, Uniqueness
SUMMARY

The discussion centers on the uniqueness of the matrix representation of a linear map between finite-dimensional vector spaces V and W, in connection with a theorem from Friedberg. The theorem states that, given a basis of V and any choice of target vectors in W, there exists a unique linear transformation T: V → W sending each basis vector to its chosen target. The participants clarify that while the theorem is needed to show that a choice of scalars determines a linear map, the existence of the scalars in the matrix representation does not require the theorem: it follows solely from the definition of a basis of W.

PREREQUISITES
  • Understanding of linear transformations and their properties
  • Familiarity with vector spaces and bases
  • Knowledge of finite-dimensional vector spaces
  • Basic concepts of linear algebra, particularly matrix representation
NEXT STEPS
  • Study Friedberg's theorem on linear transformations in detail
  • Explore the concept of bases in vector spaces and their implications
  • Learn about the relationship between linear maps and their matrix representations
  • Investigate the uniqueness of linear transformations in finite-dimensional spaces
USEFUL FOR

Students and professionals in mathematics, particularly those studying linear algebra, as well as educators seeking to clarify concepts related to linear transformations and matrix representations.

Bipolarity
Friedberg proves the following theorem:
Let ##V## and ##W## be vector spaces over a common field ##F##, and suppose that ##V## is finite-dimensional with a basis ##\{x_1, \dots, x_n\}##. For any vectors ##y_1, \dots, y_n## in ##W##, there exists exactly one linear transformation ##T: V \to W## such that ##T(x_i) = y_i## for ##i = 1, \dots, n##.

He uses this theorem to assert the following:
Suppose that ##V## and ##W## are finite-dimensional vector spaces with ordered bases ##\beta = \{x_1, \dots, x_n\}## and ##\gamma = \{y_1, \dots, y_m\}##, respectively. Let ##T: V \to W## be a linear map. Then there exist scalars ##a_{ij} \in F## (##i = 1, \dots, m## and ##j = 1, \dots, n##) such that

$$T(x_j) = \sum_{i=1}^{m} a_{ij} y_i \quad \text{for } 1 \leq j \leq n$$

He doesn't actually prove this assertion, and it is not obvious to me. Presumably the first theorem (on the action of linear maps on bases) is needed to show that the matrix representation exists and corresponds to the unique linear map with the given action on the basis. The problem is that the first theorem uses ##n## vectors from ##W##, while in the assertion the basis of ##W## has ##m## vectors, which may or may not equal ##n##. If ##m \neq n##, why is he allowed to apply the theorem?

I apologize if I'm not clear enough. Please let me know which part is not clear and I will clarify further.

BiP
 
The assertion doesn't really have anything to do with the theorem you mention. All you need for the assertion is the fact that ##(y_1,...,y_m)## form a basis. By definition of a basis, we know that for any ##y\in W##, there exist ##\alpha_1,...,\alpha_m\in F## such that

$$y = \sum_{i=1}^m \alpha_i y_i$$

This is all you're using. Indeed, just apply this observation to ##y = T(x_j)##.

So the mere existence of the ##\alpha_{ij}## doesn't need the theorem you mention. However, the converse (that any choice of ##\alpha_{ij}## determines a linear map) does need the theorem.
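To make this concrete, here is a small numerical sketch (the specific spaces, bases, and map are hypothetical, chosen for illustration): take ##V = \mathbb{R}^3## with its standard basis and ##W = \mathbb{R}^2## with a non-standard basis ##y_1 = (1,1)##, ##y_2 = (1,-1)##. Since ##(y_1, y_2)## is a basis, each ##T(x_j)## expands uniquely in it, and the coefficients ##a_{ij}## are found by solving a linear system — no appeal to the existence/uniqueness theorem is needed. Note the matrix of coefficients is ##2 \times 3## even though ##m = 2 \neq n = 3##.

```python
import numpy as np

# Hypothetical example: V = R^3 with the standard basis x_1, x_2, x_3,
# W = R^2 with the (non-standard) basis y_1 = (1, 1), y_2 = (1, -1).
Y = np.column_stack([(1.0, 1.0), (1.0, -1.0)])  # columns are y_1, y_2

# Some linear map T: R^3 -> R^2, specified by its values on the basis of V.
T_of_x = [np.array([2.0, 0.0]),   # T(x_1)
          np.array([1.0, 3.0]),   # T(x_2)
          np.array([0.0, 1.0])]   # T(x_3)

# For each j, the coordinates (a_{1j}, a_{2j}) are the unique solution of
# Y @ a = T(x_j); uniqueness holds because (y_1, y_2) is a basis, i.e. Y
# is invertible. This uses only the definition of a basis of W.
A = np.column_stack([np.linalg.solve(Y, t) for t in T_of_x])
print(A)  # the m x n matrix (a_ij), here 2 x 3

# Check: each T(x_j) is recovered from column j of A.
for j, t in enumerate(T_of_x):
    assert np.allclose(Y @ A[:, j], t)
```

The theorem from Friedberg enters only in the other direction: any ##2 \times 3## array of scalars, fed through the same construction, defines a (unique) linear map ##\mathbb{R}^3 \to \mathbb{R}^2##.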
 
