MHB Bases of function spaces and matrices with respect to the bases

Kronos1
Hi all, I'm struggling with the concepts involved here.

So I have $${P}_{2} = \left\{ a{t}^{2}+bt+c \mid a,b,c\in\mathbb{R}\right\}$$, which is a real vector space with respect to the usual addition of polynomials and multiplication of a polynomial by a constant.

I need to show that both $$\beta=\left\{1,t,{t}^{2}\right\}$$ and $$\beta^{\prime}=\left\{t,{t}^{2}+t,{t}^{2}+t+1\right\}$$ are bases for $${P}_{2}$$.

Then a real polynomial $$p(t)$$ defines the differentiable function
$$p:\mathbb{R}\to \mathbb{R}, \quad x\mapsto p(x)$$
As shown in elementary calculus, differentiation is the linear transformation
$$D:{P}_{2} \to {P}_{2}, \quad p\mapsto p^{\prime}=\frac{dp}{dx}$$
Find the matrix of $D$ with respect to the bases:

(i) $\beta$ in both the domain and co-domain
(ii) $\beta$ in the domain and $\beta^{\prime}$ in the co-domain
(iii) $\beta^{\prime}$ in the domain and $\beta$ in the co-domain
(iv) $\beta^{\prime}$ in both the domain and co-domain

Any help would be appreciated, as well as a detailed explanation as to why. Thanks in advance.
 
You are asking questions that are explained in any textbook of linear algebra. Yes, it is possible to duplicate a textbook and write a detailed explanation here, but what would justify this effort? Can you convince us that you have no access to textbooks?

See rule #11 http://mathhelpboards.com/rules/ (click on the "Expand" button on top). With respect to proving that the given sets of polynomials are bases, I suggest you show your effort by demonstrating that you know what a basis is. Can you prove that the given sets satisfy at least a part of that definition?

With respect to the matrix of the differentiation operator, its columns are the coordinates of the images of the basis vectors; see the Wikipedia article on transformation matrices. In (i), you need to take each vector from $\beta$, differentiate it and find the coordinates of the result in the same basis $\beta$. For example, $D(t^2)=2t$, so the coordinates of $D(t^2)$ with respect to $\beta$ are $(0,2,0)$. This is the last column (since $t^2$ is the last basis vector) of the matrix of $D$ with respect to $\beta$ and $\beta$.
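If you would like to check your hand computation, below is a small Python sketch using SymPy (my choice of tool; any CAS would do, and it is of course not required for the exercise). It builds the matrix for (i) column by column, exactly as described above: differentiate each basis polynomial and read off its coordinates in $\beta$.

```python
import sympy as sp

t = sp.symbols('t')
beta = [sp.Integer(1), t, t**2]  # the basis {1, t, t^2}

def coords_in_beta(p):
    """Coordinates of a polynomial of degree <= 2 in the basis {1, t, t^2}."""
    c = sp.Poly(p, t).all_coeffs()[::-1]  # all_coeffs() is highest power first
    return c + [0] * (3 - len(c))         # pad up to (constant, t, t^2)

# Each column of the matrix is the coordinate vector of D(1), D(t), D(t^2).
columns = [coords_in_beta(sp.diff(p, t)) for p in beta]
D_beta = sp.Matrix(columns).T
print(D_beta)  # Matrix([[0, 1, 0], [0, 0, 2], [0, 0, 0]])
```

For (ii)-(iv) the same recipe applies, except that coordinates must be taken in $\beta'$, which means solving a small linear system rather than reading coefficients off directly.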
 
You assume that I have not tried a textbook already. Coming to this forum is a last resort, as I am struggling to understand the definitions in the textbooks.

For all the reading I have done, I have only ascertained that a basis is a set of vectors from which you can make every other vector in the space by adding scalar multiples of the basis vectors. I cannot quite understand how you go about proving that a given set is a basis.
 
Kronos said:
I have only ascertained that a basis is a set of vectors from which you can make every other vector in the space by adding scalar multiples of the basis vectors. I cannot quite understand how you go about proving that a given set is a basis.
Hmm, I think it should be obvious that every element $at^2+bt+c$ of $P_2$ can be obtained from $1$, $t$ and $t^2$ using addition and multiplication by scalars. As for the second basis, suppose that $a$, $b$ and $c$ are fixed and we want to express $at^2+bt+c$ through elements of $\beta'$. We must have
\[
xt+y(t^2+t)+z(t^2+t+1)=at^2+bt+c
\]
for some coefficients $x$, $y$ and $z$. Equating coefficients of the same powers of $t$, we get the following system of equations.
\[
\left\{
\begin{array}{rcl}
z&=&c\\
x+y+z&=&b\\
y+z&=&a
\end{array}
\right.
\qquad(*)
\]
Moving the first equation to the bottom converts the system into echelon form, so you can determine whether it has solutions.
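If you want to double-check, here is an optional SymPy sketch (my addition as a verification aid, not part of the argument) that solves $(*)$ symbolically; a unique solution for arbitrary $a$, $b$, $c$ confirms that every polynomial in $P_2$ can be expressed through $\beta'$.

```python
import sympy as sp

a, b, c, x, y, z = sp.symbols('a b c x y z')
# the system (*): constant term, coefficient of t, coefficient of t^2
system = [sp.Eq(z, c), sp.Eq(x + y + z, b), sp.Eq(y + z, a)]
print(sp.solve(system, [x, y, z], dict=True))
# [{x: -a + b, y: a - c, z: c}]  -- exactly one solution for every a, b, c
```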

It is important to note that the ability to express every vector as a linear combination of vectors from $\beta'$ does not by itself make $\beta'$ a basis: it is only one of the two conditions for a basis. The condition you named says that $\beta'$ has sufficiently many vectors to express all vectors in the space. The second condition says that it does not have too many vectors, so that expressing other vectors through $\beta'$ does not become ambiguous: every vector must be expressible in a unique way. It is sufficient to show this only for the zero vector. We know that $0\cdot t^2+0\cdot t+0=0\cdot t+0(t^2+t)+0(t^2+t+1)$. It remains to show that no other linear combination gives the zero vector. You can determine whether this is so by looking at the system (*) above with $a=b=c=0$ and figuring out how many solutions it has.
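Again optionally, the homogeneous case can be checked mechanically (a SymPy sketch of mine, not part of the argument): the coefficient matrix of $(*)$ has nonzero determinant, so $x=y=z=0$ is the only solution and $\beta'$ is linearly independent.

```python
import sympy as sp

# rows: the equations z = 0, x + y + z = 0, y + z = 0; columns: x, y, z
A = sp.Matrix([[0, 0, 1],
               [1, 1, 1],
               [0, 1, 1]])
print(A.det())        # 1 (nonzero), so only the trivial solution exists
print(A.nullspace())  # [] -- empty null space, confirming independence
```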
 
Evgeny.Makarov said:
It is important to note that the ability to express every vector as a linear combination of vectors from $\beta'$ does not by itself make $\beta'$ a basis: it is only one of the two conditions for a basis.

Actually, since we have a finite-dimensional vector space and $|\beta| = |\beta'|$, spanning alone is sufficient here, because:

$\dim(\text{span}(\beta')) \leq |\beta'|$.

The condition you state, "the ability to express every vector as a linear combination of vectors from $\beta'$", is equivalent to saying that $\text{span}(\beta') = P_2$, which has dimension 3, so "less than" is not an option: $\dim(\text{span}(\beta')) = |\beta'| = 3$, and a spanning set whose size equals the dimension of the space must be linearly independent, hence a basis.
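For completeness, this dimension count can also be verified mechanically (an optional SymPy sketch of mine, not from the thread): the coordinate matrix of $\beta'$ with respect to the monomial basis has rank 3, so $\text{span}(\beta') = P_2$.

```python
import sympy as sp

t = sp.symbols('t')
beta_prime = [t, t**2 + t, t**2 + t + 1]

def coords(p):
    c = sp.Poly(p, t).all_coeffs()[::-1]  # reorder to (constant, t, t^2)
    return c + [0] * (3 - len(c))

M = sp.Matrix([coords(p) for p in beta_prime]).T  # columns = coordinates
print(M.rank())  # 3 == dim P_2, so beta' spans and is therefore a basis
```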
 
Evgeny.Makarov said:
You are asking questions that are explained in any textbook of linear algebra. Yes, it is possible to duplicate a textbook and write a detailed explanation here, but what would justify this effort? Can you convince us that you have no access to textbooks?

Could you recommend one of these textbooks? I have three at the moment and cannot find direction from them.
 