Why Do Hermite Polynomials Form a Basis for P3?

  • Thread starter: Moonshine
  • Tags: Basis, Polynomials
A question in my textbook asks me to show that the first four Hermite polynomials form a basis of P3.

I know how to do the problem mechanically, but I don't really understand what's going on behind the scenes.

Why can we use the coefficients of a polynomial as a column vector? I don't really know how to ask the question (which probably means I'm really lost, haha).

Also, how do you say P3 out loud (if I were speaking to someone)?

What exactly is polynomial space? Can it be visualized? I'm confused.
I think you're thinking of vector spaces exclusively as sets of columns of numbers. But vectors don't have to be columns of numbers. Vector spaces are sets whose elements behave by certain rules. For example, there's a binary operation on those elements (addition) that carries pairs of elements to a third element. This operation is associative, commutative, and so forth.

Any set that obeys all those rules is a linear space (or vector space, or space). It doesn't have to be a column of numbers. So, if you define addition of polynomials in the expected way, then it turns out that polynomials of a certain degree (or less) form a vector space.

To pick a basis is to choose an invertible linear map from the vector space of polynomials onto the "ordinary" vector space of tuples of real numbers. When you ask, "Why can we use the coefficients of a polynomial as a column vector?", what you're really doing is choosing one such mapping among many that carry polynomials into R^n.
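To make that concrete, here is a small sketch of the standard-basis coordinate map on P3 (the helper `coords` is hypothetical, just for illustration):

```python
import numpy as np

def coords(a0, a1, a2, a3):
    """Coordinates of p(x) = a0 + a1*x + a2*x^2 + a3*x^3
    relative to the standard basis {1, x, x^2, x^3} of P3."""
    return np.array([a0, a1, a2, a3])

# p(x) = 3 - x + 2x^3 corresponds to the column vector (3, -1, 0, 2) in R^4
v = coords(3, -1, 0, 2)
print(v)  # [ 3 -1  0  2]
```

The map is invertible: from the vector (3, -1, 0, 2) you can reconstruct the polynomial uniquely, which is exactly what makes {1, x, x^2, x^3} a basis.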

The Hermite polynomials form another choice of basis. They represent a different mapping from the space of polynomials to R^n. However, they have the advantage of being orthogonal, given the choice of a certain inner product.
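As a numerical sketch of the textbook exercise (assuming the physicists' convention H0 = 1, H1 = 2x, H2 = 4x^2 - 2, H3 = 8x^3 - 12x; the probabilists' convention works the same way), you can check linear independence by writing each Hermite polynomial in standard coordinates and computing a determinant:

```python
import numpy as np

# Rows are the coordinate vectors of H0..H3 relative to {1, x, x^2, x^3}.
H = np.array([
    [ 1,   0, 0, 0],   # H0 = 1
    [ 0,   2, 0, 0],   # H1 = 2x
    [-2,   0, 4, 0],   # H2 = 4x^2 - 2
    [ 0, -12, 0, 8],   # H3 = 8x^3 - 12x
], dtype=float)

# The matrix is triangular with nonzero diagonal, so
# det = 1 * 2 * 4 * 8 = 64 != 0: the four polynomials are linearly
# independent, and four independent vectors in the 4-dimensional
# space P3 automatically span it, hence form a basis.
print(np.linalg.det(H))  # 64.0
```

The triangular shape is the whole story: each Hn introduces the power x^n for the first time, so no Hermite polynomial can be a combination of the earlier ones.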
 