What is the Basis for the Polynomial Vector Spaces S, T, and S∩T?

In summary, we are asked to find a basis for the subspaces S and T of P3, as well as for their intersection S ∩ T. S is the subspace of polynomials p(x) with p(0) = 0, while T is the subspace of polynomials q(x) with q(1) = 0. Grouping the coefficients of a polynomial by powers of x lets us represent it as a column vector. The condition p(0) = 0 leads to the basis {x^2, x} for S, and the condition q(1) = 0 leads to the basis {x^2 - 1, x - 1} for T; combining both conditions gives the basis {x^2 - x} for S ∩ T.
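Below is a minimal sympy sketch of the whole computation (assuming P3 means polynomials of degree at most 2, stored as coefficient vectors (a, b, c) for a + bx + cx^2; the use of sympy and the variable names are illustrative, not part of the original problem):

```python
from sympy import Matrix

# Coordinates (a, b, c) stand for the polynomial a + b*x + c*x**2.
eval_at_0 = Matrix([[1, 0, 0]])   # row of the map p -> p(0) = a
eval_at_1 = Matrix([[1, 1, 1]])   # row of the map p -> p(1) = a + b + c

S_basis  = eval_at_0.nullspace()                             # S = ker(eval at 0)
T_basis  = eval_at_1.nullspace()                             # T = ker(eval at 1)
ST_basis = Matrix.vstack(eval_at_0, eval_at_1).nullspace()   # S ∩ T = ker of both

print(S_basis)    # [(0,1,0)^T, (0,0,1)^T]   -> {x, x^2}
print(T_basis)    # [(-1,1,0)^T, (-1,0,1)^T] -> {x - 1, x^2 - 1}
print(ST_basis)   # [(0,-1,1)^T]             -> {x^2 - x}
```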
  • #1
aredian

Homework Statement


Let S be the subspace of P3 consisting of all polynomials p(x) such that p(0) = 0, and let T be the subspace of all polynomials q(x) such that q(1) = 0. Find a basis for S, for T, and for S ∩ T.

Homework Equations





The Attempt at a Solution


I know that a basis is formed by linearly independent vectors which also generate the space they belong to. For polynomials, linear independence can be checked using the coefficient matrix that results from grouping the terms by the powers of x.
What I am not sure about is what p(0) = 0 and q(1) = 0 mean. Is it 0, 0x, 0x^2 = 0 and 1, x, x^2 = 0?

I know it is supposed to be a simple question or at least that's how I see it, but I took my last math course about 7 years ago, and I can't find a resource to clarify that for the moment.

Thanks for your help.
 
  • #2
A polynomial p(x) in your space looks like p(x)=a+bx+cx^2. p(0)=0 means a=0. p(1)=0 means a+b+c=0. Nothing mysterious about this...
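As a quick symbolic check of those two conditions, a minimal sympy sketch (library choice and names are illustrative):

```python
from sympy import symbols, Eq, solve

a, b, c, x = symbols('a b c x')
p = a + b*x + c*x**2

print(solve(Eq(p.subs(x, 0), 0), a))   # [0]      -> p(0) = 0 forces a = 0
print(solve(Eq(p.subs(x, 1), 0), a))   # [-b - c] -> p(1) = 0 means a + b + c = 0
```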
 
  • #3
OK... Now I am a bit more confused. Does that mean the vectors satisfying p(0) = 0, grouped by the powers of x, would be of the form (0 0 1)^T?

If so, then I take, say, two vectors A and B and need to verify that they are L.I. and that they span S in order to declare them a basis of S.

How do I get a basis out of two vectors of the form (0 0 1)^T?
 
  • #4
If you are grouping powers in the order 1, x, x^2, then p(x) = a + bx + cx^2 becomes (a,b,c)^T. p(0) = 0 tells you a = 0. So the vectors in S look like (0,b,c)^T for any choice of b and c. Now tell me what a basis is.
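A small sympy sketch of that decomposition, assuming the same coordinate order (1, x, x^2):

```python
from sympy import symbols, Matrix

b, c = symbols('b c')

# A general element of S in the coordinate order (1, x, x^2): the first slot is forced to 0.
v = Matrix([0, b, c])

e1 = Matrix([0, 1, 0])   # the polynomial x
e2 = Matrix([0, 0, 1])   # the polynomial x^2

print((v - (b*e1 + c*e2)).T)   # Matrix([[0, 0, 0]]) -> every vector in S is b*e1 + c*e2
```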
 
  • #5
You have chosen to identify the space you call P3 with column vectors with 3 entries. How have you done this? How did you associate the polynomials with constant coefficient 0 with a single vector (0,0,1)^T?
 
  • #6
Dick said:
If you are grouping powers in the order 1, x, x^2, then p(x) = a + bx + cx^2 becomes (a,b,c)^T. p(0) = 0 tells you a = 0. So the vectors in S look like (0,b,c)^T for any choice of b and c. Now tell me what a basis is.

Since the vectors are of the type (a, b, 0), grouping by x^2, x, 1, I can take a linear combination a(1,0,0)^T + b(0,1,0)^T, which gives (a, b, 0)^T, and use it to determine whether they span S. They DO! On the other hand, the matrix [(1,0,0)^T (0,1,0)^T] has linearly independent columns, so these vectors are L.I. Thus a basis for S would be {x^2, x}.

Correct?

Thank you very much!
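A minimal sympy sketch of that check, assuming the coordinate order (x^2, x, 1) used in the post:

```python
from sympy import Matrix

# Columns are x^2 and x in the coordinate order (x^2, x, 1).
B = Matrix([[1, 0],
            [0, 1],
            [0, 0]])

print(B.rank())   # 2 -> the two columns are linearly independent
# Every vector in S has the form (a, b, 0), i.e. a*col1 + b*col2, so the columns also span S.
```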
 
  • #7
aredian said:
Since the vectors are of the type (a, b, 0), grouping by x^2, x, 1, I can take a linear combination a(1,0,0)^T + b(0,1,0)^T, which gives (a, b, 0)^T, and use it to determine whether they span S. They DO! On the other hand, the matrix [(1,0,0)^T (0,1,0)^T] has linearly independent columns, so these vectors are L.I. Thus a basis for S would be {x^2, x}.

Correct?

Thank you very much!

Correct.
 
  • #8
Dick said:
A polynomial p(x) in your space looks like p(x)=a+bx+cx^2. p(0)=0 means a=0. p(1)=0 means a+b+c=0. Nothing mysterious about this...

Great!

Now for T... Are the vectors of T (with q(1) = 0) of the form (a, b, -(a+b)), again grouping by x^2, x, 1?

I'm not sure about the vectors in S ∩ T. Are they of the form (a, b, c) with c = 0, or only (a, b)?

Thanks for your help!
 
  • #9
They are of the form (a,b,c) where a+b+c=0, right? Can you use that relationship to eliminate a variable in the vector and find a basis? A basis vector for S intersect T has to be in both spaces.
 
  • #10
Dick said:
They are of the form (a,b,c) where a+b+c=0, right? Can you use that relationship to eliminate a variable in the vector and find a basis? A basis vector for S intersect T has to be in both spaces.

Ok... since c = -a - b I can use
ax^2 + bx + (-a - b)·1. Correct?

This would yield vectors of the form (a, b, -(a+b)), and their linear combination would be a(1,0,-1)^T + b(0,1,-1)^T = (a, b, -(a+b))^T. The coefficient matrix has linearly independent columns, so they are L.I. However, something seems wrong with my algebra, because I can't seem to find the correct coefficients so that they span T.

Were these the correct vectors at all?
 
  • #11
Ok, a general vector in T is (a, b, -(a+b)), as you said, and a*(1,0,-1) + b*(0,1,-1) = (a, b, -(a+b)), as you implied. So it looks to me like they span.
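A minimal sympy sketch of the same span and independence check, in the (x^2, x, 1) coordinates (illustrative only):

```python
from sympy import symbols, Matrix

a, b = symbols('a b')
v1 = Matrix([1, 0, -1])   # x^2 - 1 in the order (x^2, x, 1)
v2 = Matrix([0, 1, -1])   # x - 1

general = Matrix([a, b, -(a + b)])      # a general coordinate vector in T
print((a*v1 + b*v2 - general).T)        # Matrix([[0, 0, 0]]) -> they span T
print(Matrix.hstack(v1, v2).rank())     # 2 -> linearly independent
```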
 
  • #12
Dick said:
Ok, a general vector in T is (a, b, -(a+b)), as you said, and a*(1,0,-1) + b*(0,1,-1) = (a, b, -(a+b)), as you implied. So it looks to me like they span.

Yes indeed, and the basis is {x^2 - 1, x - 1}.
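A quick sympy sanity check of these polynomials, together with the candidate x^2 - x for S ∩ T that follows from combining the two conditions (a = 0 and b = -c); the sketch is illustrative:

```python
from sympy import symbols

x = symbols('x')

for q in (x**2 - 1, x - 1):
    print(q.subs(x, 1))               # 0, 0 -> both basis polynomials of T vanish at x = 1

# In S ∩ T both conditions hold: a = 0 and a + b + c = 0, so b = -c.
p = x**2 - x                          # candidate basis polynomial for S ∩ T
print(p.subs(x, 0), p.subs(x, 1))     # 0 0 -> x^2 - x lies in both S and T
```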

Thanks!
 

What is a polynomial vector space?

A polynomial vector space is a set of polynomials that is closed under addition and scalar multiplication. It is an ordinary vector space whose elements happen to be polynomials rather than columns of numbers.

What are the basis elements of a polynomial vector space?

The basis elements of a polynomial vector space are polynomials that form a linearly independent set, meaning no polynomial in the set can be written as a linear combination of the others, and that span the space. Every polynomial in the vector space can then be expressed as a unique linear combination of these basis elements.
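For instance, a minimal sympy sketch of such an independence check through the coefficient matrix (the three sample polynomials are arbitrary choices):

```python
from sympy import Matrix

# Columns hold the coefficients (constant, x, x^2) of 1 + x, x + x^2, and 1 + 2x + x^2.
M = Matrix([[1, 0, 1],
            [1, 1, 2],
            [0, 1, 1]])

print(M.rank())          # 2 < 3 -> the three polynomials are NOT linearly independent
print(M[:, :2].rank())   # 2     -> the first two on their own are linearly independent
```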

How is the dimension of a polynomial vector space determined?

The dimension of a polynomial vector space is the number of elements in any basis; every basis of a given space has the same size. When a subspace is described as the kernel (null space) of a linear map, as S and T are above, the rank-nullity theorem, dim(domain) = rank + nullity, can be used to compute its dimension.
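A minimal sympy sketch of that count for the subspace S discussed above, where the matrix represents evaluation at x = 0 (illustrative only):

```python
from sympy import Matrix

# Evaluation at x = 0 on a + b*x + c*x**2, written as a 1x3 matrix acting on (a, b, c).
A = Matrix([[1, 0, 0]])

rank = A.rank()                  # 1
nullity = len(A.nullspace())     # 2
print(rank + nullity)            # 3 = dimension of the domain (rank-nullity theorem)
# The subspace S = ker(A) therefore has dimension 2, matching the basis {x, x^2}.
```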

What is the difference between a finite and infinite dimensional polynomial vector space?

A finite-dimensional polynomial vector space has a basis with finitely many elements, while an infinite-dimensional one does not. For example, the polynomials of degree at most 2 form a 3-dimensional space with basis {1, x, x^2}, whereas the space of all polynomials of arbitrary degree is infinite-dimensional, since no finite set of polynomials can span every degree.

How are polynomial vector spaces used in practical applications?

Polynomial vector spaces have a wide range of applications in various fields of science and engineering. They can be used to model real-world phenomena, such as the motion of objects and the behavior of physical systems. They are also used in computer graphics and image processing algorithms, as well as in data analysis and signal processing.
