Although most people know the derivative from calculus, you don't need calculus to define the derivative of a polynomial.
Given a polynomial in $P_n(\Bbb R)$, say:
$p(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$
We define the derivative $p'(x)$ (I prefer to write this as $Dp$ for reasons we shall see later) as:
$Dp(x) = a_1 + 2a_2x + \cdots + na_nx^{n-1}$
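If you like to compute, here is a minimal Python sketch of this purely algebraic rule acting on a list of coefficients (the helper name `deriv` is mine, not from any book):

```python
def deriv(coeffs):
    # coeffs = [a0, a1, ..., an] encodes a0 + a1*x + ... + an*x^n;
    # the rule above sends it to [a1, 2*a2, ..., n*an]
    return [k * a for k, a in enumerate(coeffs)][1:]

# p(x) = 1 + 2x + 3x^2  ->  Dp(x) = 2 + 6x
print(deriv([1, 2, 3]))  # [2, 6]
```

No limits, no calculus: just re-indexing and scaling the coefficients.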
If one uses the basis $B = \{1,x,x^2,\dots,x^n\}$ for $P_n(\Bbb R)$, one can identify (such an identification is called a linear isomorphism) $P_n(\Bbb R)$ with $\Bbb R^{n+1}$ like so:
$a_0 + a_1x + a_2x^2 + \cdots + a_nx^n \mapsto (a_0,a_1,a_2,\dots,a_n)$
In this basis (actually "two bases," but the basis for $P_{n-1}(\Bbb R)$ is "just like" the basis for $P_n(\Bbb R)$), $D$ has the $n \times (n+1)$ matrix:
$[D]_B = \begin{bmatrix}0&1&0&\cdots&0\\0&0&2&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&n \end{bmatrix}$
which makes it clear $D$ is a linear mapping.
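A small numpy sketch that builds this matrix for any $n$ (the function name `D_matrix` is my own invention):

```python
import numpy as np

def D_matrix(n):
    # n x (n+1) matrix of D : P_n -> P_{n-1} in the bases
    # {1, x, ..., x^n} and {1, x, ..., x^(n-1)}
    M = np.zeros((n, n + 1), dtype=int)
    for k in range(1, n + 1):
        M[k - 1, k] = k          # D(x^k) = k*x^(k-1)
    return M

print(D_matrix(3))
# [[0 1 0 0]
#  [0 0 2 0]
#  [0 0 0 3]]
```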
We also have the linear mapping: $L: P_n(\Bbb R) \to P_n(\Bbb R)$ given by:
$L(p(x)) = p(a - x)$ for a fixed $a \in \Bbb R$.
It may be instructive to see the matrix of $L$ with respect to the basis $B$:
$[L]_B = \begin{bmatrix}1&a&a^2&\cdots&a^n\\0&-1&-2a&\cdots&-na^{n-1}\\0&0&1&\cdots&\frac{n(n-1)}{2}a^{n-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&(-1)^n \end{bmatrix}$
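Each column here comes from expanding $(a - x)^k$ by the binomial theorem: the $(j,k)$ entry is $\binom{k}{j}a^{k-j}(-1)^j$. A sketch that assembles the matrix this way (assuming Python 3.8+ for `math.comb`; the name `L_matrix` is mine):

```python
import numpy as np
from math import comb

def L_matrix(n, a):
    # (n+1) x (n+1) matrix of L(p(x)) = p(a - x) in the basis {1, x, ..., x^n}:
    # column k holds the coefficients of (a - x)^k
    M = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        for j in range(k + 1):
            M[j, k] = comb(k, j) * a ** (k - j) * (-1) ** j
    return M

print(L_matrix(2, 1))
# [[ 1.  1.  1.]
#  [ 0. -1. -2.]
#  [ 0.  0.  1.]]
```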
So the mapping you are given is just $L \circ D$ (with $a = 1$):
$(L \circ D)(p(x)) = L(D(p(x))) = L(p'(x)) = p'(1 - x)$.
For the case $n = 3$, we can see $D$ as having the $3 \times 4$ matrix:
$\begin{bmatrix}0&1&0&0\\0&0&2&0\\0&0&0&3 \end{bmatrix}$
and $L$ as having the $3 \times 3$ matrix (with $a = 1$):
$\begin{bmatrix}1&1&1\\0&-1&-2\\0&0&1\end{bmatrix}$
Then multiplying these two matrices together, we get the $3 \times 4$ matrix:
$\begin{bmatrix}0&1&2&3\\0&0&-2&-6\\0&0&0&3 \end{bmatrix}$
which is the matrix for $T$ in the basis $B$.
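A machine check of that product, with the two matrices entered directly (just a sanity-check sketch):

```python
import numpy as np

D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])    # D : P_3 -> P_2
L = np.array([[1,  1,  1],
              [0, -1, -2],
              [0,  0,  1]])     # L(p(x)) = p(1 - x) on P_2

print(L @ D)
# [[ 0  1  2  3]
#  [ 0  0 -2 -6]
#  [ 0  0  0  3]]
```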
Now the vector representing the constant polynomial $p(x) = 1$ in the basis $B$ is just $(1,0,0,0)$, while the vector representing $x$ is $(0,1,0,0)$, the vector representing $x^2$ is $(0,0,1,0)$, and the vector representing $x^3$ is $(0,0,0,1)$. So applying the matrix of $T$ to these vectors, we get:
$T(1) \to [T(1,0,0,0)]_B = [0,0,0]_B \to 0$
$T(x) \to [T(0,1,0,0)]_B = [1,0,0]_B \to 1$
$T(x^2) \to [T(0,0,1,0)]_B = [2,-2,0]_B \to 2 - 2x = 2(1-x)$
$T(x^3) \to [T(0,0,0,1)]_B = [3,-6,3]_B \to 3 - 6x + 3x^2 = 3(1 - x)^2$
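The same computation by machine, hitting each standard basis vector with the matrix of $T$ (a sketch; the images are exactly the columns of the matrix, read off above):

```python
import numpy as np

T = np.array([[0, 1,  2,  3],
              [0, 0, -2, -6],
              [0, 0,  0,  3]])        # matrix of T(p(x)) = p'(1 - x), n = 3

for k, name in enumerate(["1", "x", "x^2", "x^3"]):
    e = np.zeros(4, dtype=int)
    e[k] = 1                          # coordinate vector of x^k
    print(name, "->", T @ e)          # columns: (0,0,0), (1,0,0), (2,-2,0), (3,-6,3)
```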
If, instead, your book had defined:
$T(p(x)) = (p(1 - x))'$
we would have taken $D \circ L$, giving a different $3 \times 4$ matrix (using a $4 \times 4$ "$L$").
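A sketch of that alternative, building the $4 \times 4$ "$L$" the same way as before; the result turns out to be the negative of $L \circ D$'s matrix, which matches the chain rule: $(p(1-x))' = -p'(1-x)$.

```python
import numpy as np
from math import comb

n, a = 3, 1
L4 = np.zeros((n + 1, n + 1), dtype=int)    # 4x4 "L" acting on P_3
for k in range(n + 1):
    for j in range(k + 1):
        L4[j, k] = comb(k, j) * a ** (k - j) * (-1) ** j

D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])                # D : P_3 -> P_2

print(D @ L4)
# [[ 0 -1 -2 -3]
#  [ 0  0  2  6]
#  [ 0  0  0 -3]]
```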
I find that the use of derivative notation in linear algebra problems involving polynomial spaces is sort of confusing, and obscures the underlying vector algebra: we just have some linear map $D$ on a vector space $V$; to understand what $D$ does, we need only examine what it does to a basis (and once having CHOSEN a basis, we can do everything in matrices relative to that basis).
Or, perhaps more simply put: everything you want to know about a polynomial vector space is given by the coefficients of the polynomials (the $x$ is just "excess baggage" in terms of the "vector-space-ness", although it does come into play when considering the RING structure of this vector space...but that is a topic beyond the scope of most linear algebra courses).