How do I determine whether a set of polynomials form a basis?

AI Thread Summary
To determine whether a set of polynomials forms a basis for P2, the polynomials must be linearly independent and must span the space. The first set, p1(t) = 3 + t^2 and p2(t) = -1 + 5t + 7t^2, is linearly independent but does not span P2, since the corresponding equations are not solvable for every target polynomial. In contrast, the second set, p1(t) = 1 + 2t + t^2, p2(t) = -1 + t^2, and p3(t) = 7 + 5t - 6t^2, is linearly independent and spans P2, and so forms a basis. The discussion emphasizes that both spanning and linear independence are needed to establish a basis for a vector space.
NewtonianAlch

Homework Statement



Are the following statements true or false? Explain your answers carefully, giving all necessary working.

(1) p_{1}(t) = 3 + t^{2} and p_{2}(t) = -1 +5t +7t^{2} form a basis for P_{2}

(2) p_{1}(t) = 1 + 2t + t^{2}, p_{2}(t) = -1 + t^{2} and p_{3}(t) = 7 + 5t -6t^{2} form a basis for P_{2}



The Attempt at a Solution



So I rendered both (1) and (2) into matrices and did row reduction on them:

[Image: row-reduced augmented matrices for (1) and (2); the original link is no longer available.]

I believe (1) does not form a basis for P_{2} because there is no solution even though the vectors are linearly independent. Whereas (2) does have a solution and the vectors are linearly independent, so it should form a basis.

Thoughts: To form a basis of P_{2}, wouldn't you always need at least 3 vectors? My book states that to form a basis the vectors need to be linearly independent (which is established) and must also be a spanning set, but what exactly does this mean?

It also does an example of 3 vectors just like (2): here's what the end result of their row-reduction looked like:

[Image: the textbook's row-reduction result; the original link is no longer available.]

Their comment was

The row-echelon matrix had a non-leading right-hand column and hence the equation Ax=b has a solution. Therefore span(S) = R^{3}.
Moreover, the left side of the row-echelon matrix has no non-leading columns, so the only solution for a zero right-hand side is x_{1} = x_{2} = x_{3} = 0. This shows that S is a linearly independent set. We have now proved S is a linearly independent spanning set for R^{3} and is therefore a basis for R^{3}.

So does this mean if b3 - 2b2 + b1 = 0, then it wouldn't be a spanning set and hence not a basis? Why is this?
 
NewtonianAlch said:

Homework Statement



Are the following statements true or false? Explain your answers carefully, giving all necessary working.

(1) p_{1}(t) = 3 + t^{2} and p_{2}(t) = -1 +5t +7t^{2} form a basis for P_{2}

(2) p_{1}(t) = 1 + 2t + t^{2}, p_{2}(t) = -1 + t^{2} and p_{3}(t) = 7 + 5t -6t^{2} form a basis for P_{2}

The Attempt at a Solution



So I rendered both (1) and (2) into matrices and did row reduction on them:

I believe (1) does not form a basis for P_{2} because there is no solution even though the vectors are linearly independent. Whereas (2) does have a solution and the vectors are linearly independent, so it should form a basis.
To be a bit more precise, (1) has a solution only when
-3b_1 + \frac{22}{5}b_2 + b_3 = 0
so it's not that there's never a solution but that there isn't always a solution.
Thoughts: To form a basis in P_{2} wouldn't you need at least 3 vectors always?
Yes, that's right. P2 is a three-dimensional vector space, so any basis for it will have exactly 3 vectors.
In my book it states that to form a basis the vectors need to be linearly independent (which is established) and also must be a spanning set, what does this exactly mean?
When you say a set of vectors {v1, v2, …, vn} spans a space V, that means if you take any element b in V, you can find some linear combination of v1, v2, …, and vn that's equal to b. In other words, you can find a solution to the equation
c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{b}
In (1), what you found was that you could find solutions only some of the time, not all of the time. Consequently, even though p1(t) and p2(t) are independent, they do not span P2 and therefore do not form a basis for P2.
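If you want a quick numerical sanity check of the spanning argument (a sketch only, assuming NumPy is available; this is not part of the original working), write each polynomial as a coefficient vector and compare the rank of the resulting matrix to dim P2 = 3:

```python
import numpy as np

# Columns are coefficient vectors (constant, t, t^2) of each polynomial.
# Set (1): p1 = 3 + t^2, p2 = -1 + 5t + 7t^2
A1 = np.array([[3, -1],
               [0,  5],
               [1,  7]])

# Set (2): p1 = 1 + 2t + t^2, p2 = -1 + t^2, p3 = 7 + 5t - 6t^2
A2 = np.array([[1, -1,  7],
               [2,  0,  5],
               [1,  1, -6]])

# A set spans P2 (dimension 3) exactly when its matrix has rank 3.
print(np.linalg.matrix_rank(A1))  # 2 -> independent, but does not span P2
print(np.linalg.matrix_rank(A2))  # 3 -> spans P2; 3 independent vectors form a basis
```

Two columns can have rank at most 2, which is another way of seeing why two polynomials can never span a three-dimensional space.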
So does this mean if b3 - 2b2 + b1 = 0, then it wouldn't be a spanning set and hence not a basis? Why is this?
No, that's not what they mean.
 
So if -3b_1 + \frac{22}{5}b_2 + b_3 = 3, for example, would that mean it would be a spanning set? I'm not sure I understand the distinction between the equation being equal to zero as opposed to non-zero.

You said:
In (1), what you found was that you could find solutions only some of the time, not all of the time.
How is this so? What is the major distinguishing feature in the equation that's going to tell me whether it's some of the time or all of the time?

When you say a set of vectors {v1, v2, …, vn} spans a space V, that means if you take any element b in V, you can find some linear combination of v1, v2, …, and vn that's equal to b. In other words, you can find a solution to the equation
c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{b}

So if b had elements (x, x, x), just as an example, I'd get a system of equations to solve for c_{1}, c_{2}, c_{3} - wouldn't you always be able to find solutions for the c's? Hence any vector b would always be in the span?

Thank you for the very detailed response by the way, it's helped in understanding this.
 
What's the reasoning behind forming the matrix and then row-reducing it? Can you explain that?
 
vela said:
What's the reasoning behind forming the matrix and then row-reducing it? Can you explain that?

So that we can get the values of b_{1..x}? Effectively the scalar multipliers for the vectors in a set to determine if a given vector is in a spanning set?

So if we got b_{1}, b_{2}, b_{3} all equal to 3.

Then 3v_{1} + 3v_{2} + 3v_{3} = b ?
 
Not exactly. In (1), you're trying to find c1 and c2 such that
c_1 p_1(t) + c_2 p_2(t) = f(t)
where f(t) = b_1t^2 + b_2t + b_3 is an element of P2. This is the whole point of setting up the matrix and reducing it.

Now if you plug everything in, you get
c_1 (t^2+3) + c_2(7t^2+5t-1) = b_1t^2+b_2t+b_3
or
(c_1 + 7c_2)t^2 + (5c_2)t + (3c_1 - c_2) = b_1t^2+b_2t+b_3
Matching coefficients on the two sides of the equation, you get
\begin{align*}
c_1 + 7c_2 &= b_1 \\
5c_2 &= b_2 \\
3c_1-c_2 &= b_3
\end{align*}
To solve this system of equations, you set up the augmented matrix
\left(\begin{array}{cc|c} 1 & 7 & b_1 \\ 0 & 5 & b_2 \\ 3 & -1 & b_3 \end{array}\right)
This is the matrix you formed. Solving this system of equations is equivalent to solving the top equation. And remember you're solving for c1 and c2.
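For what it's worth, the same elimination can be sketched symbolically (a sketch assuming SymPy; the row operations below simply mirror the hand reduction, with b1, b2, b3 standing for the coefficients of the target polynomial):

```python
import sympy as sp

b1, b2, b3 = sp.symbols('b1 b2 b3')
M = sp.Matrix([[1,  7, b1],
               [0,  5, b2],
               [3, -1, b3]])

# R3 -> R3 - 3*R1 clears the first column of the bottom row.
M[2, :] = M[2, :] - 3 * M[0, :]
# R3 -> R3 + (22/5)*R2 clears the second column of the bottom row.
M[2, :] = M[2, :] + sp.Rational(22, 5) * M[1, :]

# The bottom row is now (0, 0, -3*b1 + 22*b2/5 + b3): the system is
# consistent only when that last entry vanishes.
print(sp.expand(M[2, 2]))
```

The leftover expression in the corner is exactly the condition -3b_1 + \frac{22}{5}b_2 + b_3 = 0 quoted earlier: the b's that satisfy it are the only ones for which solutions exist.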


So now what you're doing is using what you learned before about solving systems of equations to see if you can always find a solution, or if there are no solutions, or if there are infinite solutions.

  1. If you have a basis, you should find you get a unique solution for any possible values of the b's.
  2. If you find you get no solution for some values of the b's, that means some vectors cannot be expressed as a linear combination of your set. In other words, the vectors don't span the space. This is what you found for problem (1).
  3. If you find you can get an infinite number of solutions, that means the vectors are linearly dependent.
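As an illustration of the first case using set (2) (a sketch assuming NumPy; the target polynomial below is an arbitrary choice for demonstration, not one from the problem):

```python
import numpy as np

# Columns are the coefficient vectors (constant, t, t^2) of set (2).
A = np.array([[1, -1,  7],
              [2,  0,  5],
              [1,  1, -6]], dtype=float)

# Pick an arbitrary target polynomial, say f(t) = 4 - t + 2t^2,
# i.e. b = (4, -1, 2) in the (constant, t, t^2) ordering.
b = np.array([4.0, -1.0, 2.0])

# A is square with nonzero determinant, so a unique (c1, c2, c3)
# exists for *every* b: the set spans P2 and is independent.
c = np.linalg.solve(A, b)
assert np.allclose(A @ c, b)
print(c)
```

Swapping in any other b gives another unique solution, which is the "unique solution for any possible values of the b's" situation in case 1.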
 
Ah, I'm starting to see the picture now. I guess I never really thought about where b_{1}, b_{2}, and b_{3} came about, but now I do. Thanks so much for your help, this was very interesting.
 