# Uniqueness of linear maps

Friedberg proves the following theorem:
Let V and W be vector spaces over a common field F, and suppose that V is finite-dimensional with a basis $\{ x_{1}, \dots, x_{n} \}$. For any vectors $y_{1}, \dots, y_{n}$ in W, there exists exactly one linear transformation $T: V \to W$ such that $T(x_{i}) = y_{i}$ for $i = 1, \dots, n$.

He uses this theorem to assert the following:
Suppose that V and W are finite-dimensional vector spaces with ordered bases $\{ x_{1}, \dots, x_{n} \}$ and $\{ y_{1}, \dots, y_{m} \}$, respectively. Let $T: V \to W$ be a linear map. Then there exist scalars $a_{ij} \in F$ (for $i = 1, \dots, m$ and $j = 1, \dots, n$) such that

$$T(x_{j}) = \sum^{m}_{i=1} a_{ij} y_{i} \quad \text{for } 1 \leq j \leq n$$

He doesn't really prove this assertion about the matrix representation, and it is not obvious to me. Clearly the first theorem (about the action of a linear map on a basis) is needed to show that the matrix representation exists and corresponds to a unique linear map. The problem is that the first theorem uses n vectors from W, while in the second statement the basis of W has m vectors, which may or may not equal n. If m is not equal to n, why is he allowed to use the theorem?
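As a concrete sketch of the situation being asked about (my own example, not from Friedberg): take $V = \mathbb{R}^2$ (so $n = 2$) and $W = \mathbb{R}^3$ (so $m = 3$) with their standard bases. The matrix of a map $T: V \to W$ is then $3 \times 2$, and column $j$ holds the coordinates $a_{ij}$ of $T(x_j)$; the fact that $m \neq n$ causes no trouble.

```python
import numpy as np

# Hypothetical linear map T: R^2 -> R^3 given by T(a, b) = (a, a + b, b).
# With the standard bases, the matrix of T is 3 x 2 (m = 3, n = 2):
# column j holds the coordinates a_ij of T(x_j) in the basis (y_1, y_2, y_3).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

x1 = np.array([1.0, 0.0])  # basis vector x_1 of V
x2 = np.array([0.0, 1.0])  # basis vector x_2 of V

# Applying T to a basis vector reproduces the corresponding column of A,
# so the scalars a_ij are just coordinates of T(x_j) in the basis of W.
assert np.array_equal(A @ x1, A[:, 0])
assert np.array_equal(A @ x2, A[:, 1])
```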

I apologize if I'm not clear enough. Please let me know which part is not clear and I will clarify further.

BiP


## Answers and Replies

The assertion doesn't really have anything to do with the theorem you mention. All you need for the assertion is the fact that ##(y_1,\dots,y_m)## is a basis. By definition of a basis, for any ##y\in W## there exist unique ##\alpha_1,\dots,\alpha_m\in F## such that

$$y = \sum_{i=1}^m \alpha_i y_i$$

This is all you're using. Indeed, just apply this observation to ##y = T(x_j)##.
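For a made-up illustration of that step (numbers chosen only for concreteness): suppose ##m = 3## and the expansion of ##T(x_1)## in the basis happens to be

$$T(x_1) = 2y_1 + (-1)y_2 + 0\,y_3, \qquad \text{so } a_{11} = 2,\; a_{21} = -1,\; a_{31} = 0.$$

The scalars ##a_{ij}## are nothing more than the coordinates of ##T(x_j)## in the ordered basis of W.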

So the mere existence of the ##\alpha_{ij}## doesn't need the theorem you mention. However, the converse (that a choice of ##\alpha_{ij}## determines a unique linear map) does need the theorem.