# [Linear Algebra] Help with Linear Transformation exercises

• iJake
In summary: differentiating ##T(f)## returns the original polynomial ##f##, so ##D \circ T = \text{id}##, but doing the operations in the other order loses the constant term, so ##T \circ D \neq \text{id}##. Both are linear transformations, yet they are not inverses of each other. As shown below, T is a linear transformation. To find Ker(T), you need to find all polynomials f such that T(f) = 0. To find Im(T), you need to determine which polynomials of degree at most n + 1 actually arise as T(f).

## Homework Statement

1.
(a) Prove that the following is a linear transformation:
##\text{T} : \mathbb k[X]_n \rightarrow \mathbb k[X]_{n+1}##

##\text{T}(a_0 + a_1X + \ldots + a_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}##

Find ##\text{Ker}(T)## and ##\text{Im}(T)##.

(b) If ##\text{D} : \mathbb R[X]_{n+1} \rightarrow \mathbb R[X]_{n}##
##D(p) = p'##
and ##T : \mathbb R[X]_{n} \rightarrow \mathbb R[X]_{n+1}## is the transformation from part (a), prove that

##D \circ T = \text{id}## but that ##T \circ D \neq \text{id}##

---

## The Attempt at a Solution

1a)

##T(a_0 + a_1X + \ldots + a_nX^n + \ldots + b_0 + b_1X + \ldots + b_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1} + \ldots + b_0X + \frac{b_1}{2}X^2 + \ldots + \frac{b_n}{n+1} = T(a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}) + T(b_0X + \frac{b_1}{2}X^2 + \ldots + \frac{b_n}{n+1})##

##c \cdot (a_0 + a_1X + \ldots + a_nX^n) = ca_0 + ca_1X + \ldots + ca_nX^n##

##T(c \cdot (a_0 + a_1X + \ldots + a_nX^n) = T(ca_0 + ca_1X + \ldots + ca_nX^n) = (c \cdot a_0X) + (c \cdot \frac{a_1}{2}X^2) + \ldots + (c \cdot \frac{a_n}{n+1}) = c \cdot T(a_0 + a_1X + \ldots + a_nX^n)##

I conclude that ##T## is a linear transformation.

However, I'm not really sure how to find ##\text{Ker}(T)## and ##\text{Im}(T)## . For ##\text{Ker}(T)## for example, would it simply be something along the lines of ##T | a_n = 0 \forall a \in \mathbb k## ? Forgive me if this is a foolish question.

Part B has me confused, but mostly because I don't know how to evaluate ##D##. How does ##D(p)## relate to the form of the linear transformation I was given in part (a)?

Thank you Physics Forums for any help.

Your post is quite long. We would prefer that you posted a single problem per thread, especially since both of these are long.
iJake said:

## Homework Statement

1.
(a) Prove that the following is a linear transformation:
##\text{T} : \mathbb k[X]_n \rightarrow \mathbb k[X]_{n+1}##
What does your notation ##\mathbb k[X]_n## mean?
iJake said:
##\text{T}(a_0 + a_1X + \ldots + a_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}##
This is not clear. From the context below, it appears that T maps a polynomial of degree n to a polynomial of degree n + 1. If so, the above should be written as ##\text{T}(a_0 + a_1X + \ldots + a_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n + 1}X^{n + 1}##
iJake said:
##\text{Find}## ##\text{Ker}(T)## and ##\text{Im}(T)##

(b) If ##\text{D} : \mathbb R[X]_{n+1} \rightarrow \mathbb R[X]_{n}##
##D(p) = p'##
and ##T : \mathbb R[X]_{n} \rightarrow \mathbb R[X]_{n+1}## is the transformation from part (a), prove that

##D \circ T = \text{id}## but that ##T \circ D \neq \text{id}##

2.
(a) Let ##V## be an ##\mathbb R##-vector space and ##j : V \rightarrow V## a linear transformation such that ##j \circ j = id_V##. Now, let

##S = \{v \in V : j(v) = v\}## and ##A = \{v \in V : j(v) = -v\}##

Prove that ##S## and ##A## are subspaces and that ##V = S \oplus A##.

(b) Deduce from part (a) that the space of square matrices decomposes as the direct sum of the symmetric and the skew-symmetric matrices (by finding a convenient linear transformation ##j##).

[apologies if that last part is a bit weird sounding, I'm translating from Spanish]

---

## The Attempt at a Solution

1a)

##T(a_0 + a_1X + \ldots + a_nX^n + \ldots + b_0 + b_1X + \ldots + b_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1} + \ldots + b_0X + \frac{b_1}{2}X^2 + \ldots + \frac{b_n}{n+1} = T(a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}) + T(b_0X + \frac{b_1}{2}X^2 + \ldots + \frac{b_n}{n+1})##

##c \cdot (a_0 + a_1X + \ldots + a_nX^n) = ca_0 + ca_1X + \ldots + ca_nX^n##

##T(c \cdot (a_0 + a_1X + \ldots + a_nX^n) = T(ca_0 + ca_1X + \ldots + ca_nX^n) = (c \cdot a_0X) + (c \cdot \frac{a_1}{2}X^2) + \ldots + (c \cdot \frac{a_n}{n+1}) = c \cdot T(a_0 + a_1X + \ldots + a_nX^n)##

I conclude that ##T## is a linear transformation.

However, I'm not really sure how to find ##\text{Ker}(T)## and ##\text{Im}(T)## . For ##\text{Ker}(T)## for example, would it simply be something along the lines of ##T | a_n = 0 \forall a \in \mathbb k## ? Forgive me if this is a foolish question.
No, Ker(T) is the set of degree n polynomials f such that T(f) = 0. Think about which polynomials have antiderivatives of 0.
iJake said:
Part B has me confused, but mostly because I don't know how to evaluate ##D##. How does ##D(p)## relate to the form of the linear transformation I was given in part (a)?
What you're doing in the two parts is essentially showing that differentiation and antidifferentiation are linear transformations, and that taking the derivative of an antiderivative of a function gets you back to the original function, but the same is not true if you take the antiderivative of the derivative of some function. In short, these operations aren't truly inverses of each other.

For example, if ##f(x) = x^2##, ##\frac d {dx} x^2 = 2x##, and ##\int 2x dx = x^2 + C##.
Going the other way, ##\int \frac d {dx} x^2 dx = x^2##
iJake said:
As for question 2)

a) The test for ##S## and ##A## being subspaces is fairly trivial so I don't include it. Now, to determine that ##V = S \oplus A## I'm finding it a little trickier. I observe clearly that ##S \cap A = \{0\}## but how do I formalize that and lead into it proving that V is the direct sum of S and A?

b) is also confusing me. I found this and it looks remarkably similar to my problem, but I do not know how to apply it here.

Thank you Physics Forums for any help.

Let's say ##a## and ##b## are those polynomials for short.
iJake said:

## Homework Statement

1.
(a) Prove that the following is a linear transformation:
##\text{T} : \mathbb k[X]_n \rightarrow \mathbb k[X]_{n+1}##

##\text{T}(a_0 + a_1X + \ldots + a_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}##

##\text{Find}## ##\text{Ker}(T)## and ##\text{Im}(T)##

(b) If ##\text{D} : \mathbb R[X]_{n+1} \rightarrow \mathbb R[X]_{n}##
##D(p) = p'##
and ##T : \mathbb R[X]_{n} \rightarrow \mathbb R[X]_{n+1}## is the transformation from part (a), prove that

##D \circ T = \text{id}## but that ##T \circ D \neq \text{id}##

2.
(a) Let ##V## be an ##\mathbb R##-vector space and ##j : V \rightarrow V## a linear transformation such that ##j \circ j = id_V##. Now, let

##S = \{v \in V : j(v) = v\}## and ##A = \{v \in V : j(v) = -v\}##

Prove that ##S## and ##A## are subspaces and that ##V = S \oplus A##.

(b) Deduce from part (a) that the space of square matrices decomposes as the direct sum of the symmetric and the skew-symmetric matrices (by finding a convenient linear transformation ##j##).

[apologies if that last part is a bit weird sounding, I'm translating from Spanish]

---

## The Attempt at a Solution

1a)

##T(a_0 + a_1X + \ldots + a_nX^n + \ldots + b_0 + b_1X + \ldots + b_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1} + \ldots + b_0X + \frac{b_1}{2}X^2 + \ldots + \frac{b_n}{n+1} = T(a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}) + T(b_0X + \frac{b_1}{2}X^2 + \ldots + \frac{b_n}{n+1})##
Some cut-and-paste errors, e.g. an ##X^{n+1}## is missing at the last term of ##a## and ##b##. You've written ##T(T(a))## and ##T(T(b))## on the RHS. More serious is that you already used additivity in the first step. You have to write ##T(a+b)=T(a_0+\ldots +b_0+\ldots)=T((a_0+b_0)+\ldots)## and then apply the definition of the operator, since the formula is only given for a single polynomial. On the RHS you get ##(a_0+b_0)X+\ldots = a_0X+\ldots +b_0X+\ldots = T(a)+T(b)##. This is nit-picking, but the entire exercise is only a question of accuracy.
##c \cdot (a_0 + a_1X + \ldots + a_nX^n) = ca_0 + ca_1X + \ldots + ca_nX^n##

##T(c \cdot (a_0 + a_1X + \ldots + a_nX^n) = T(ca_0 + ca_1X + \ldots + ca_nX^n) = (c \cdot a_0X) + (c \cdot \frac{a_1}{2}X^2) + \ldots + (c \cdot \frac{a_n}{n+1}) = c \cdot T(a_0 + a_1X + \ldots + a_nX^n)##
You could add some steps here, too, but it's correct. A bracket is missing in the first term.
I conclude that ##T## is a linear transformation.
Yes. Integration is linear.
However, I'm not really sure how to find ##\text{Ker}(T)## and ##\text{Im}(T)## . For ##\text{Ker}(T)## for example, would it simply be something along the lines of ##T | a_n = 0 \forall a \in \mathbb k## ? Forgive me if this is a foolish question.
It's not a foolish question, it's a matter of practice. ##\operatorname{ker}(T)=\{\,a\, : \,T(a)=a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}X^{n+1}=0\,\}##. Now when is a polynomial the zero polynomial? Similar for the image. ##\operatorname{im}(T)=\{\,b=b_0+b_1X+\ldots + b_mX^m\, : \,b=T(a)\,\}## and then you can write the conditions of possible ##b_i\,.##
Part B has me confused, but mostly because I don't know how to evaluate ##D##. How does ##D(p)## relate to the form of the linear transformation I was given in part (a)?
##D## is the differentiation and ##T## the integration. You could just start with an example, say ##a= 2+3x+4x^2## and calculate ##D(a)## and ##T(a)## and see what happens to the coefficients ##(2,3,4)##
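Carrying that suggestion out in code makes the coefficient bookkeeping visible. A minimal Python sketch (the names `D` and `T` simply mirror the operators above; the coefficient-list representation is an illustrative choice, not part of the exercise):

```python
from fractions import Fraction

def D(coeffs):
    """Differentiate: sends a_k X^k to k*a_k X^(k-1), dropping the constant term."""
    return [k * a for k, a in enumerate(coeffs)][1:] or [Fraction(0)]

def T(coeffs):
    """Antidifferentiate with zero constant: sends a_k X^k to a_k/(k+1) X^(k+1)."""
    return [Fraction(0)] + [a / (k + 1) for k, a in enumerate(coeffs)]

a = [Fraction(2), Fraction(3), Fraction(4)]   # a = 2 + 3x + 4x^2
print(D(a))   # coefficients of 3 + 8x
print(T(a))   # coefficients of 2x + (3/2)x^2 + (4/3)x^3
```

One sees that ##T## shifts every coefficient up one degree and never produces a constant term, which is exactly the observation needed for ##\operatorname{im}(T)##.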
As for question 2)

a) The test for ##S## and ##A## being subspaces is fairly trivial so I don't include it. Now, to determine that ##V = S \oplus A## I'm finding it a little trickier. I observe clearly that ##S \cap A = \{0\}## but how do I formalize that and lead into it proving that V is the direct sum of S and A?
We have ##j^2=1## which means ##j(j(v))=v##. Now consider ##u=j(v)+v## and ##w=j(v)-v##.
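Assembling that hint into the standard argument (we may divide by 2 since ##V## is an ##\mathbb R##-vector space): every ##v \in V## decomposes as
$$v = \underbrace{\tfrac{1}{2}\big(v + j(v)\big)}_{\in\, S} \;+\; \underbrace{\tfrac{1}{2}\big(v - j(v)\big)}_{\in\, A},$$
because ##j\big(\tfrac{1}{2}(v + j(v))\big) = \tfrac{1}{2}(j(v) + v)## and ##j\big(\tfrac{1}{2}(v - j(v))\big) = \tfrac{1}{2}(j(v) - v) = -\tfrac{1}{2}(v - j(v))##, using ##j(j(v)) = v##. Together with ##S \cap A = \{0\}## (if ##j(v) = v## and ##j(v) = -v##, then ##2v = 0##), this gives ##V = S \oplus A##.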
b) is also confusing me. I found this and it looks remarkably similar to my problem, but I do not know how to apply it here.
Symmetric and skew-symmetric indicates, that it must have something to do with matrices ##A## and ##A^\tau##. Now use a similar trick as before and define ##A\pm A^\tau## for an arbitrary matrix ##A##.
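For part (b), taking ##j(A) = A^\tau## (which satisfies ##j \circ j = \mathrm{id}##) turns the abstract decomposition into ##A = \tfrac12(A + A^\tau) + \tfrac12(A - A^\tau)##. A small Python sketch of that split, using plain nested lists rather than any particular matrix library:

```python
def transpose(M):
    """Transpose a matrix stored as a list of rows."""
    return [list(row) for row in zip(*M)]

def halves(M):
    """Split M into its symmetric part (M + M^T)/2 and skew part (M - M^T)/2."""
    Mt = transpose(M)
    sym = [[(a + b) / 2 for a, b in zip(r, s)] for r, s in zip(M, Mt)]
    skew = [[(a - b) / 2 for a, b in zip(r, s)] for r, s in zip(M, Mt)]
    return sym, skew

A = [[1.0, 2.0], [5.0, 3.0]]
S, K = halves(A)
assert S == transpose(S)                                   # S is symmetric
assert K == [[-x for x in row] for row in transpose(K)]    # K is skew-symmetric
assert [[s + k for s, k in zip(r, t)] for r, t in zip(S, K)] == A   # S + K = A
```

The assertions verify exactly the three claims of the exercise on one example: the two parts lie in ##S## and ##A## respectively, and their sum recovers ##A##.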

I separated the problem into two threads for the sake of tidiness and forum preference.

Ah yes, I apologize, I typo'd the first time and proceeded to paste it throughout the problem. I've written it down properly in my notebook where I attempted to work it out.

Seeing the key point of the exercise, it all looks much more meaningful now, haha. Also, your nitpicking is much appreciated in all of the threads I've made. When I have a little more free time in-between semesters, I've thought about looking into Polya's book How to Solve It and Velleman's How to Prove It, in an effort to refine my mathematical intuition in general and see how best to solve and correctly describe the solutions of problems like these. Do you recommend either or both of those texts?

It's late here so I will reply to this thread tomorrow with my edited solutions, but I feel I've got an idea of the exercise and the correct notation now. A polynomial is the zero polynomial when ##\frac {a_n}{n+1} = 0##. I'm not visualizing the image quite so well. I understand everything you're telling me, but I'm not sure what you mean by the conditions of possible ##b_i##, did you not describe them when you described the image?

Thank you for your assistance, I'll finish this, as well as my other thread tomorrow.

iJake said:
Do you recommend either or both of those texts?
I don't know them. English isn't my native language either. But it doesn't really matter. At this stage, all that really counts is practice - by reading proofs as well as doing exercises. This linear algebra stuff is usually a bit new compared to school mathematics, so one has to get used to it. Once you have dealt with your hundredth kernel or image, things will likely be much clearer than they appear to you now.
iJake said:
A polynomial is the zero polynomial when ##\frac{a_n}{n+1} = 0##
A polynomial is the zero polynomial, if all coefficients are zero.
iJake said:
I'm not visualizing the image quite so well.
The image consists of all those vectors, here polynomials, which are hit by ##T##. So
$$\operatorname{im}(T) = T(\mathbb{k}[X]_n) = \{\,b=b_0+b_1X+\ldots +b_{n+1}X^{n+1}\, : \,\text{ there is a polynomial }a = a_0+a_1X+\ldots +a_nX^n\text{ such that }b=T(a)\,\} \subseteq \mathbb{k}[X]_{n+1}$$
This means we have to consider the equation ##b=T(a)##, i.e. ##b_0+b_1X+\ldots +b_{n+1}X^{n+1}= a_0X+\frac{a_1}{2}X^2+\ldots +\frac{a_{n}}{n+1}X^{n+1}\,##. Now you can compare the two polynomials coefficient by coefficient, pairing the same powers of ##X##. The result is a description of those polynomials ##b## which can be written in the form ##a_0X+\frac{a_1}{2}X^2+\ldots +\frac{a_{n}}{n+1}X^{n+1}\,##. One sees already that these cannot be all polynomials ##b##. Which ones are missing?

I apologize, I made a typo when I wrote the problem that may have been repeated throughout our discourse. It states,

##\text{T} : \mathbb k[X]_n \rightarrow \mathbb k[X]_{n+1}##

##\text{T}(a_0 + a_1X + \ldots + a_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}X^{n+1}##

I'm not sure which of the polynomials of ##b## are missing. If ##T## is integration, which polynomial ##b## am I not able to integrate?

iJake said:
I apologize, I made a typo when I wrote the problem that may have been repeated throughout our discourse. It states,

##\text{T} : \mathbb k[X]_n \rightarrow \mathbb k[X]_{n+1}##

##\text{T}(a_0 + a_1X + \ldots + a_nX^n) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}X^{n+1}##
Yes, what you wrote was confusing, which led to an incorrect interpretation in what I wrote in post #2.
iJake said:
I'm not sure which of the polynomials of ##b## are missing. If ##T## is integration, which polynomial ##b## am I not able to integrate?
It might be helpful to look at things in a less abstract manner. Suppose that your transformation is given by ##T :\mathbb k[X]_2 \rightarrow \mathbb k[X]_3##, with ##T(a_0 + a_1X + a_2X^2) = a_0X + \frac {a_1} 2X^2 + \frac{a_2} 3 X^3##
Are there polynomials in the set of polynomials of degree 3 or less that aren't in the image of T?

I've sat here puzzling at it for a moment, but I don't see any polynomials that aren't in the image of T. Does it have something to do with the fact that each polynomial in the preimage is elevated one degree by T? I can't see any that wouldn't be in the image, because it's fair game for ##a_i## to be rational, irrational, 0, etc... Sorry if I'm missing something obvious here.

iJake said:
I've sat here puzzling at it for a moment, but I don't see any polynomials that aren't in the image of T.
I do.

iJake said:
Does it have something to do with the fact that each polynomial in the preimage is elevated one degree by T?
Yes. So, in my example, can you think of any polynomials of degree 3 or less that aren't in the image of this transformation?

iJake said:
I can't see any that wouldn't be in the image, because it's fair game for ##a_i## to be rational, irrational, 0, etc...
It's not related to whether ##a_i## is rational, irrational, or whatever.

iJake said:
I've sat here puzzling at it for a moment, but I don't see any polynomials that aren't in the image of T. Does it have something to do with the fact that each polynomial in the preimage is elevated one degree by T? I can't see any that wouldn't be in the image, because it's fair game for ##a_i## to be rational, irrational, 0, etc... Sorry if I'm missing something obvious here.
Far too complicated! The images ##T(a)##, do they have a constant term?

It is well known from calculus that both differentiation and integration from 0 to x are linear on all differentiable and all integrable functions respectively. That is what you are asked to prove in the case of polynomials. (Of course you have to correct your misprint by multiplying the last term by ##X^{n+1}##.)

I apologize for letting this thread slip away! I was away for the weekend and couldn't reply. I'm back now, however.

My notation might be off, but I think I've got the idea of the solution.

##\text{Ker}(T) = \{\,a : T(a) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}X^{n+1} = 0\,\}##
A polynomial is the zero polynomial only when all of its coefficients are zero, so ##T(a) = 0## forces ##a_0 = a_1 = \ldots = a_n = 0##, giving ##\text{Ker}(T) = \{0\}##.

##\text{Im}(T) = T(\mathbb k[X]_n) = \{b = b_0 + b_1X + \ldots + b_{n+1}X^{n+1} : \exists\, a = a_0 + a_1X + \ldots + a_nX^n : b=T(a)\} \subseteq \mathbb k[X]_{n+1}##
##\text{Im}(T) = \{\,b \in \mathbb k[X]_{n+1} : b_0 = 0\,\}##

I reached this conclusion after finally seeing that there are no constant terms in the image of ##T(a)##.

As for the second part,

##a = a_0 + a_1X + \ldots + a_nX^n##
##T(a) = a_0X + \frac{a_1}{2}X^2 + \ldots + \frac{a_n}{n+1}X^{n+1}##
##D \circ T(a) = D(T(a)) = a_0 + \frac{2a_1}{2}X + \ldots + \frac{(n+1)a_n}{n+1}X^{n} = a_0 + a_1X + \ldots + a_nX^n = a##, so ##D \circ T = \text{id}##
whereas
##D(a) = a_1 + 2a_2X + \ldots + na_nX^{n-1}##
##T \circ D(a) = T(D(a)) = a_1X + \frac{2a_2}{2}X^2 + \ldots + \frac{na_n}{n}X^{n} = a_1X + a_2X^2 + \ldots + a_nX^n \neq a##, since the constant term ##a_0## is lost; hence ##T \circ D \neq \text{id}##
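These two computations can be sanity-checked mechanically by composing the coefficient maps. A short Python sketch (exact rationals via `fractions`; the helper names are illustrative):

```python
from fractions import Fraction as F

def D(c):
    """Derivative on coefficient lists: a_k X^k -> k*a_k X^(k-1)."""
    return [k * a for k, a in enumerate(c)][1:] or [F(0)]

def T(c):
    """Antiderivative with zero constant term: a_k X^k -> a_k/(k+1) X^(k+1)."""
    return [F(0)] + [a / (k + 1) for k, a in enumerate(c)]

a = [F(7), F(-2), F(5)]     # 7 - 2X + 5X^2
assert D(T(a)) == a         # D o T = id
assert T(D(a)) != a         # T o D loses the constant term a_0 = 7
```

On this example ##T(D(a))## comes back as ##-2X + 5X^2##, i.e. everything except the constant term, which is exactly why ##T \circ D \neq \text{id}##.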

## 1. What is a linear transformation?

A linear transformation is a mathematical function that maps a vector space to another vector space while preserving the basic structure of the space, including operations such as addition and scalar multiplication.

## 2. What are some real-world applications of linear algebra and linear transformations?

Linear algebra and linear transformations have many real-world applications, such as image and signal processing, computer graphics, data analysis, machine learning, and physics.

## 3. How do I determine if a transformation is linear?

A transformation is linear if it satisfies two properties: additivity and homogeneity. Additivity means that the transformation of the sum of two vectors is equal to the sum of the transformations of the individual vectors. Homogeneity means that scaling a vector and then applying the transformation is the same as applying the transformation and then scaling the result.

## 4. How do I represent a linear transformation in matrix form?

A linear transformation can be represented by a matrix. The columns of the matrix represent the images of the basis vectors of the original vector space. To apply the transformation to a vector, simply multiply the vector by the matrix.
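To make this concrete with the transformation from this thread: for ##T : \mathbb k[X]_2 \rightarrow \mathbb k[X]_3##, the columns of the matrix are the coefficient vectors of ##T(1) = X##, ##T(X) = \tfrac12 X^2##, and ##T(X^2) = \tfrac13 X^3## in the basis ##\{1, X, X^2, X^3\}##. A small Python sketch (plain lists, no matrix library assumed):

```python
from fractions import Fraction as F

# Matrix of T w.r.t. the bases {1, X, X^2} and {1, X, X^2, X^3};
# column j holds the coefficients of T(X^j).
M = [
    [F(0), F(0),    F(0)],     # constant row: always 0 (no constant terms in the image)
    [F(1), F(0),    F(0)],     # X row
    [F(0), F(1, 2), F(0)],     # X^2 row
    [F(0), F(0),    F(1, 3)],  # X^3 row
]

def apply(M, v):
    """Matrix-vector product: the transformation applied to a coefficient vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# T(2 + 3X + 4X^2) = 2X + (3/2)X^2 + (4/3)X^3
assert apply(M, [F(2), F(3), F(4)]) == [F(0), F(2), F(3, 2), F(4, 3)]
```

The all-zero first row restates the thread's key observation: no polynomial in the image has a constant term, so ##T## is not surjective.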

## 5. Can a linear transformation have a negative determinant?

Yes, a linear transformation can have a negative determinant. The determinant of a linear transformation represents the scaling factor of the transformation, and it can be positive, negative, or zero. A negative determinant indicates that the transformation causes a reflection or a flip of the original vector space.