I Existence of basis for P_2 with no polynomial of degree 1

  Oct 7, 2016 #1
    I have the following question: Is there a basis for the vector space of polynomials of degree 2 or less consisting of three polynomial vectors ##\{ p_1, p_2, p_3 \}##, where none is a polynomial of degree 1?

    We know that the standard basis for the vector space is ##\{1, t, t^2\}##. However, this wouldn't be allowed because there is a polynomial of degree 1 in this basis.

    I'm thinking that there is no basis without a polynomial of degree 1, but I can't seem to formalize it.
     
  Oct 7, 2016 #2

    fresh_42

    Staff: Mentor

    Assume there is a basis, ##\{p_1(t),p_2(t),p_3(t)\}##. Since ##q(t)=t## is in the vector space, what does this mean?
     
  Oct 7, 2016 #3

    mathman

    Science Advisor
    Gold Member

    How about ##(t^2,\ t^2+1,\ t^2+t)##?
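
    A quick way to check this suggestion is to write the coefficients of each polynomial in the standard basis ##\{1, t, t^2\}## as the columns of a matrix and test that the matrix is invertible. A small Python/NumPy sketch (my own illustration, not from the post):

[code]
import numpy as np

# Coefficients (constant, t, t^2) of t^2, t^2 + 1 and t^2 + t, one per column.
M = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)

# A nonzero determinant means the three polynomials are linearly independent,
# hence a basis of the 3-dimensional space P_2.
print(np.linalg.det(M))  # 1.0, so {t^2, t^2 + 1, t^2 + t} is a basis
[/code]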
     
  Oct 7, 2016 #4

    mathwonk

    Science Advisor
    Homework Helper

    The usual basis for ##\mathbb{R}^3## is ##(1,0,0), (0,1,0), (0,0,1)##, but isn't ##(1,1,1), (0,1,1), (0,0,1)## also a basis? What does that translate into for polynomials, where ##(a,b,c)## represents ##a + bX + cX^2##?
     
  Oct 7, 2016 #5
    Thanks, it makes sense now. In general, if I know what the standard basis is for a vector space, how could I construct other bases from that standard basis?

    Also, are there examples of vector spaces that have only a small, finite number of bases?
     
  Oct 7, 2016 #6

    fresh_42

    Staff: Mentor

    You can take any regular (invertible) square matrix. Its row or column vectors, interpreted in your "standard" basis, then form a new basis.
    In fact, the regular matrices over the reals are dense in the set of all matrices, which means: whatever matrix you start from, changing its entries only a little already gives you a regular matrix.
    To have a finite number of bases, you first of all need a finite field, for otherwise with a basis ##\{b_1,\dots ,b_n\}## and a nonzero scalar ##\beta## of your field, ##\{\beta \cdot b_1, b_2, \dots ,b_n\}## is also a basis. Infinitely many ##\beta## give infinitely many bases.
    So, e.g., in any ##\mathbb{Z}_p^n## you find only a finite number of possible bases over ##\mathbb{Z}_p##.
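
    A minimal sketch of this construction (my own illustration in Python/NumPy, not from the post): draw a random ##3\times 3## matrix, check that it is regular, and read off its columns as coefficient vectors of new basis polynomials for ##P_2##.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Columns are coefficient vectors (constant, t, t^2) in the standard basis {1, t, t^2}.
M = rng.random((3, 3))

# A randomly drawn matrix is regular (invertible) with probability 1,
# so its columns almost surely give a new basis of P_2.
if abs(np.linalg.det(M)) > 1e-12:
    names = ["1", "t", "t^2"]
    for j in range(3):
        poly = " + ".join(f"{M[i, j]:.3f}*{names[i]}" for i in range(3))
        print(f"p_{j + 1}(t) = {poly}")
else:
    print("Unlucky draw; sample again.")
[/code]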
     
  Oct 8, 2016 #7

    Math_QED

    Homework Helper

    What does the notation with the ##\mathbb{Z}## stand for?
     
  Oct 8, 2016 #8

    fresh_42

    Staff: Mentor

    ##\mathbb{Z}_p## denotes the set ##\{0,1,2, \dots ,p-1\}## for a positive integer ##p##, here a prime number. These are the possible remainders of integer division by ##p##. If ##p## is a prime, this set carries an addition and a multiplication that satisfy all the laws of a field: associativity, distributivity, and so on.

    An alternative description is:
    Call two integers equivalent if and only if they leave the same remainder on division by ##p##. This splits ##\mathbb{Z}## into the ##p## subsets ##0+p \mathbb{Z}, 1+p \mathbb{Z}, 2+ p \mathbb{Z}, \dots , (p-1)+p \mathbb{Z}##. These sets are called cosets with respect to ##p\mathbb{Z}##, or equivalence classes. They carry a field structure, i.e. we can add, subtract, multiply and divide them.

    Everyday examples are ##\mathbb{Z}_2 = \{0,1\}##, with which our computers and the light switch on the wall work, and ##\mathbb{Z}_{12}##, which shows us the hours on the clock. But ##12## isn't prime. This means we cannot divide by every number: ##3 \cdot 4 = 0## (three times ##4## hours is the null position again), so division by ##4## is not possible in this case. Therefore ##p## has to be prime.
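
    To make the clock example concrete, here is a small brute-force illustration of my own (not from the post) that searches for multiplicative inverses modulo 12 and modulo 7:

[code]
def inverses_mod(n):
    """For each a in {1, ..., n-1}, find b with a*b = 1 (mod n), or None if no inverse exists."""
    return {a: next((b for b in range(1, n) if (a * b) % n == 1), None)
            for a in range(1, n)}

print(inverses_mod(12))  # 2, 3, 4, 6, 8, 9, 10 have no inverse: 12 is not prime
print(inverses_mod(7))   # every nonzero element has an inverse: 7 is prime
[/code]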
     
  Oct 8, 2016 #9

    Math_QED

    Homework Helper

    I might be missing something, but don't we need multiplicative inverses for a field?
     
  Oct 8, 2016 #10

    fresh_42

    Staff: Mentor

    Yes. Therefore ##\mathbb{Z}_{12}## isn't a field (only a ring). But all ##\mathbb{Z}_p## with ##p## prime are fields.
    If you have a number ##a \neq 0## with ##a < p##, then ##a## and ##p## are coprime, and you can find ##\alpha \, , \, \beta## such that ##1=\alpha a + \beta p##. Reducing this equation modulo ##p##: the remainder of ##\alpha## on division by ##p## times the remainder of ##a## on division by ##p## leaves remainder ##1## on division by ##p##, i.e. ##\alpha a = 1## in ##\mathbb{Z}_p##, or ##1/a = \alpha##.
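
    The coefficients ##\alpha \, , \, \beta## in ##1=\alpha a + \beta p## come from the extended Euclidean algorithm. A small Python sketch of my own (the function names are just for illustration):

[code]
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g == a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod_p(a, p):
    """Multiplicative inverse of a in Z_p, for p prime and a not divisible by p."""
    g, alpha, _beta = extended_gcd(a, p)
    assert g == 1  # a and p are coprime
    return alpha % p

print(inverse_mod_p(4, 7))  # 2, since 4 * 2 = 8 = 1 (mod 7)
[/code]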
     
  Oct 8, 2016 #11

    Stephen Tashi

    Science Advisor

    For example, if we consider the standard basis for polynomials of degree at most 2 with real coefficients to be ##\{1, x, x^2\}##, we can say that ##\{1, 2.33\ x, x^2\}## is a different basis. So a "cheap" way to construct a different basis is to multiply some of the basis vectors in the standard basis by non-zero scalars.

    If you want to do something more interesting, you need more complicated algorithms. The general idea would be to replace some of the vectors in the standard basis by linear combinations of themselves and other vectors in the standard basis without introducing any dependence in the new basis. For example, we could replace ##\{1,x,x^2\}## by ##\{1 + x, 1 - x, x^2\}## by using the observation that ## 1 = (1/2)((1+x) + (1-x)) ## and ## x = (1/2)( (1+x) - (1-x) ) ##.
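
    To find the coordinates of an arbitrary polynomial in such a new basis, one can solve a small linear system. A sketch of my own in NumPy (not from the post):

[code]
import numpy as np

# Columns: coefficient vectors (constant, x, x^2) of the new basis {1 + x, 1 - x, x^2}.
B = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])

# p(x) = 3 + 5x + 2x^2 written in the standard basis {1, x, x^2}.
p = np.array([3.0, 5.0, 2.0])

# Solve B @ c = p for the coordinates c of p in the new basis.
c = np.linalg.solve(B, p)
print(c)  # [ 4. -1.  2.], i.e. p = 4(1 + x) - 1(1 - x) + 2 x^2
[/code]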

    That's an interesting question!

    If you are studying a text where the field of scalars is assumed to be the real numbers or the complex numbers, the answer is "no", because of what I mentioned above about being able to create a new basis vector by taking a scalar multiple of an old one. However, as others have mentioned, there are examples of vector spaces where the field of scalars is finite. For example, we can try the set of polynomials of degree at most 2 with coefficients in the integers mod 3. Does that work? It has only a finite number of elements in it.
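
    One way to explore that question is to count the bases by brute force, identifying each such polynomial with its coefficient vector in ##\mathbb{Z}_3^3##. A sketch of my own (not from the post):

[code]
from itertools import product

def det_mod3(u, v, w):
    """Determinant mod 3 of the matrix with rows u, v, w over Z_3."""
    a, b, c = u
    d, e, f = v
    g, h, i = w
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % 3

vectors = list(product(range(3), repeat=3))  # all 27 coefficient vectors
ordered = sum(1 for u in vectors for v in vectors for w in vectors
              if det_mod3(u, v, w) != 0)

# Ordered bases: (27 - 1)(27 - 3)(27 - 9) = 11232; as unordered sets: 11232 / 3! = 1872.
print(ordered, ordered // 6)  # 11232 1872
[/code]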

    The topic of a "standard basis" highlights the distinction between a "vector" and a "representation of a vector". For example, it is tempting to declare that {(1,0,0), (0,1,0), (0,0,1)} is "the standard basis" for a 3 dimensional vector space. But what "(0,1,0)" represents is ambiguous. For example, if we consider the vector space of polynomials of degree at most 2 with real valued coefficients and take its standard basis to be ##\{1, x, x^2\}## then "(0,1,0)" represents the polynomial ##(0)(1) + (1)(x) + 0(x^2) ##. But in the context of a book on physics, the author would probably assume "(0,1,0)" represents a vector in the y-direction in a cartesian coordinate system.

    So, in an abstract vector space, declaring that a set like {(1,0,0), (0,1,0), (0,0,1)} is "the standard basis" doesn't really define what the standard basis is. A representation like "(0,1,0)" only has meaning after we have specified what the vectors in the "standard basis" are.

    Furthermore, a basis is defined as a set of vectors with certain properties. In a set, the order of the elements doesn't matter. So ##\{1,x,x^2\}## is the same basis as ##\{1,x^2,x\}##. For a representation like "(0,1,0)" to have a definite meaning, we must not only specify the set of basis vectors, we must also specify that they are listed in a definite order.
     
  Oct 9, 2016 #12
    Thank you guys for the very detailed discussion. I have a few more questions

    If every real vector space is isomorphic to ##\mathbb{R}^n##, would finding a new (non-trivial) basis from some given basis for any real vector space be only as complicated as it is to find a new basis from a given basis in ##\mathbb{R}^n##? For example, would finding a new basis in ##P_2## be only as hard as finding a new basis in ##\mathbb{R}^3##?

    Also, to what extent is it possible to describe a vector without any reference to a basis? For example, if I were to describe a vector in ##\mathbb{R}^n##, it seems that I would always need to implicitly refer to the standard basis of ##\mathbb{R}^n##.
     
  Oct 9, 2016 #13

    fresh_42

    Staff: Mentor

    Every ##n##-dimensional real vector space is. But there are also vector spaces of infinite dimension, e.g. the continuous functions.
    I'm not sure what you mean by "as complicated" or "as hard".
    If you roll dice to get some random numbers, preferably with a few digits after the decimal point, and arrange them as vectors, the chances are high that you get a basis. Not so hard.
    That's an interesting question. A vector is basically an arrow: it starts somewhere, has a length, and points in some direction. How would you communicate such an arrow to me? "Somewhere" needs a map, "length" needs a measure or scale, and "direction" needs an orientation. Unless we sit together and talk about our drawings, something to measure all of this will be needed.

    Another system than the one you probably have in mind when you say "standard basis" is polar coordinates, a measurement system of lengths and angles.
     
  Oct 9, 2016 #14

    Stephen Tashi

    Science Advisor

    Every finite dimensional real vector space is isomorphic to the ##\mathbb{R}^n## of that dimension. However, keep in mind that the definition of "vector space" does not include the concept of the "norm" or "length" of a vector, nor the concept of an inner product or the angle between vectors. So an isomorphism between vector spaces does not imply that two real vector spaces of the same dimension are isomorphic with respect to the properties that come from a norm.

    The two problems are essentially the same. The numerical calculations would be the same.

    For example, if you have a physical situation where a vector is described as "the force of the ladder against the wall", there isn't any reference to a basis until you impose a coordinate system on the problem. For example "The force of the ladder against the wall" isn't necessarily "in the x-direction" or "in the y-direction".

    Furthermore, a "coordinate system" is a distinct concept from the concept of representing a vector in terms of basis vectors. For example, the polar coordinate representation of a vector involves two numbers, but they are not coefficients of basis vectors for ##\mathbb{R}^2##. The representation of vectors in a form like (0,1,0) is a particular coordinate system for vectors, but not the only possible coordinate system.
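
    A tiny illustration of my own of why the polar pair ##(r, \theta)## is a coordinate system but not a list of basis coefficients: scaling a vector scales all of its basis coefficients, yet it leaves the angle coordinate unchanged.

[code]
import math

def to_polar(x, y):
    """Polar coordinates (r, theta) of the vector (x, y)."""
    return math.hypot(x, y), math.atan2(y, x)

v = (1.0, 1.0)
w = (2.0, 2.0)  # w = 2 * v

print(to_polar(*v))  # (1.414..., 0.785...)
print(to_polar(*w))  # (2.828..., 0.785...): r doubles but theta does not,
                     # so (r, theta) does not depend linearly on the vector.
[/code]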

    The general concept of a "coordinate system" permits the same thing to be represented by two different sets of coordinates. For example, in ##\mathbb{R}^2## we can take the 3 vectors { (1,0), (0,1), (1,1) } and represent the vector (2,2) as 2(1,0) + 2(0,1) + 0(1,1) or as 0(1,0) + 0(0,1) + 2(1,1). So we can give (2,2) the coordinates (2,2,0) or (0,0,2). This is an inconvenient coordinate system for most purposes, but it does satisfy the definition of a coordinate system.
     