Why is the matrix representation of a linear map unique?

  • Thread starter: Bipolarity
  • Tags: Linear, Uniqueness

Summary
The discussion centers on the uniqueness of the matrix representation of a linear map between finite-dimensional vector spaces. Friedberg's theorem asserts that for a finite-dimensional vector space V with a basis and any vectors in W, there exists a unique linear transformation T mapping the basis of V to those vectors in W. The confusion arises regarding the application of this theorem when the dimensions of the bases for V and W differ, questioning the validity of using the theorem in this context. It is clarified that while the existence of coefficients for the linear transformation does not require the theorem, establishing that these coefficients correspond to a linear map does. The conversation emphasizes the importance of understanding the relationship between the dimensions of the vector spaces involved.
Bipolarity
Friedberg proves the following theorem:
Let ##V## and ##W## be vector spaces over a common field ##F##, and suppose that ##V## is finite-dimensional with a basis ##\{x_1, \dots, x_n\}##. For any vectors ##y_1, \dots, y_n## in ##W##, there exists exactly one linear transformation ##T: V \to W## such that ##T(x_i) = y_i## for ##i = 1, \dots, n##.

He uses this theorem to assert the following:
Suppose that ##V## and ##W## are finite-dimensional vector spaces with ordered bases ##\{x_1, \dots, x_n\}## and ##\{y_1, \dots, y_m\}##, respectively. Let ##T: V \to W## be a linear map. Then there exist scalars ##a_{ij} \in F## (##i = 1, \dots, m## and ##j = 1, \dots, n##) such that

$$T(x_j) = \sum_{i=1}^{m} a_{ij}\, y_i \quad \text{for } 1 \leq j \leq n$$

He doesn't really prove this assertion about the matrix representation, and it is not obvious to me. Clearly the first theorem (on the action of a linear map on a basis) is needed to show that the matrix representation exists and corresponds to the unique linear map with the prescribed action on the basis. The problem is that the first theorem uses ##n## vectors from ##W##, while in the second statement the basis he uses for ##W## has ##m## vectors, which may or may not equal ##n##. If ##m \neq n##, why is he allowed to use the theorem?

I apologize if I'm not clear enough. Please let me know which part is not clear and I will clarify further.

BiP
 
The assertion doesn't really have anything to do with the theorem you mention. All you need for the assertion is the fact that ##(y_1, \dots, y_m)## form a basis of ##W##. By definition of a basis, for any ##y \in W## there exist ##\alpha_1, \dots, \alpha_m \in F## such that

$$y = \sum_{i=1}^{m} \alpha_i y_i$$

This is all you're using. Indeed, just apply this observation to ##y = T(x_j)##.

So the mere existence of the ##\alpha_{ij}## doesn't need the theorem you mention. However, the converse (that any choice of the ##\alpha_{ij}## determines some linear map) does need the theorem.
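A small numeric sketch of this point (my own illustration, not from the thread; the matrices `X`, `Y`, and `T` are made-up examples). With ##n = 2## and ##m = 3## the basis sizes differ, yet the coefficients ##a_{ij}## exist and are unique, because the coordinates of ##T(x_j)## in the basis of ##W## solve an invertible linear system:

```python
import numpy as np

# Ordered basis {x_1, x_2} of V = R^2 and {y_1, y_2, y_3} of W = R^3
# (basis vectors stored as columns).
X = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # basis of V, n = 2
Y = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])     # basis of W, m = 3

# Some linear map T: R^2 -> R^3, written in the standard bases.
T = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])

# For each basis vector x_j, the column of coefficients a_{.j} is the
# unique solution of Y @ a = T @ x_j; Y is invertible since its columns
# form a basis, so no appeal to m = n is needed.
A = np.linalg.solve(Y, T @ X)       # shape (3, 2): the a_{ij}

# Check: T(x_j) = sum_i a_{ij} y_i for every j.
assert np.allclose(Y @ A, T @ X)
```

Note that the system is square (##m \times m##) regardless of ##n##: each ##T(x_j)## is expanded in the ##m##-element basis of ##W## separately, which is why the mismatch between ##m## and ##n## never enters.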
 
