# Dual basis problem. (Linear Algebra)

## Homework Statement

Prove that if m < n and if y_1,...,y_m are linear functionals on an n-dimensional vector space V, then there exists a non-zero vector x in V such that [x,y_j] = 0 for j = 1,..., m

## The Attempt at a Solution

My thinking is that we should somehow write every functional in terms of its values at the basis vectors, but I'm not really sure what to do. Any help would be appreciated.

If e1, e2, ..., en are the basis vectors, we wish to find x = a1e1 + a2e2 + ... + anen such that

$$\left[ \begin{array}{c} y_1(a_1e_1 + a_2e_2 + \ldots + a_ne_n) \\ y_2(a_1e_1 + a_2e_2 + \ldots + a_ne_n) \\ \vdots \\ y_m(a_1e_1 + a_2e_2 + \ldots + a_ne_n) \end{array}\right] = \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \end{array}\right]$$

All I've done is rephrased the problem. From this, do you see how you would find a1, a2, ..., an?

I've reached that point before, but from there I'm stuck. My first thought was letting the first m entries of the vector be 0, but that wouldn't accomplish anything. I also thought about representing each linear functional as a linear combination of the dual basis vectors, but I'm not completely sure how that would help.

Office_Shredder
Can you go from a basis of the dual space to find a corresponding basis in the original space?
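For reference, here is the standard dual-basis relation written in the bracket notation of the problem (I'll use f_j for the dual basis to avoid clashing with the given functionals y_j): if $e_1, \ldots, e_n$ is a basis of $V$, its dual basis $f_1, \ldots, f_n$ in the dual space is determined by

$$[e_i, f_j] = \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$

and any functional $y$ can then be expanded as $y = [e_1, y]\, f_1 + \ldots + [e_n, y]\, f_n$.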

In continuation of my earlier advice, you will need to use the fact that each y_j is a linear functional.

Tedjn: I've used linearity to rewrite them in the form
a_1 y_1(e_1) + ... + a_n y_1(e_n) = 0
and similarly for each of the other functionals. But still, nothing.
Office_Shredder:
I don't really understand what you're hinting at, sorry.

Each y_j(e_i) is a constant element of the field. If V is a real vector space, then each y_j(e_i) is a constant real number. Does that format remind you of anything?

Tedjn:
Linear systems! Then if each y_j(e_i) produces a constant real number, we have a system of linear equations in n unknowns with m equations, right?

I believe that should work.
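As a quick numerical sanity check (a sketch only, not part of the proof), one can verify that a homogeneous system with m = 2 equations in n = 4 unknowns has a non-trivial solution. The matrix entries A[j, i] play the role of the values y_j(e_i); computing a null-space vector via the SVD is just one convenient way to exhibit such a solution:

```python
import numpy as np

# The rows of A play the role of the functionals: A[j, i] = y_j(e_i).
# Here m = 2 < n = 4, so the null space must be non-trivial.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))

# The rows of Vh corresponding to zero singular values span the null
# space of A; since rank(A) <= 2 < 4, the last row is a null vector.
_, _, Vh = np.linalg.svd(A)
x = Vh[-1]

print(np.allclose(A @ x, 0))   # every y_j vanishes on x
print(np.linalg.norm(x) > 0)   # and x is non-zero
```

Both checks print True: x is a unit vector (so certainly non-zero) annihilated by every row of A, which is exactly the vector the problem asks for.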

Then we have to know that it has a non-trivial solution, right? Is there any way to know that besides reasoning that builds on "matrices"?

And oh, there's a small follow-up question:
"What does this result say in general about the solutions of linear equations?"
I'd say that a homogeneous system of m equations in n unknowns with m < n always has a non-trivial solution, but that's just me.
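The dimension count behind that claim can be made precise with rank–nullity (writing $A$ for the $m \times n$ coefficient matrix with entries $A_{ji} = y_j(e_i)$, a notational choice on my part):

$$\dim \ker A = n - \operatorname{rank} A \geq n - m > 0,$$

so the solution space of the homogeneous system $Ax = 0$ contains a non-zero vector.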

It may be possible; maybe you can approach it from a different angle by working with the dual basis as Office_Shredder suggested earlier. Personally, I prefer to think of this solution as one that builds on our knowledge of systems of linear equations, where matrices are just a form of bookkeeping for our coefficients.

EDIT: To the follow-up: indeed, it tells us that a homogeneous linear system has a non-trivial solution whenever m < n. But since we used that fact in the first place, it does make me wonder whether they were looking for a different solution.