The text is just observing that for a field $F$, the polynomial ring $F[x]$ is Euclidean (with the "$d$" function being the degree of a polynomial), and that any Euclidean domain is a principal ideal domain.
Indeed, in any Euclidean domain $D$, given an ideal $I$ of $D$, we have $I = (x)$ for some $x \in D$. The argument is just the same:
Suppose $I \neq (0)$, and let $x \in I$ be such that $d(x)$ is minimal among the nonzero elements of $I$ (if $I = (0)$, just choose $x = 0$ and we are done). Let $y$ be any other element of $I$. I claim $x|y$. Since $D$ is Euclidean, we may write:
$y = qx + r$ where $d(r) < d(x)$, or $r = 0$.
Now $r = y - qx \in I$, since $y \in I$ and $qx \in I$ ($I$ is an ideal, so it absorbs multiplication by ring elements). If $r \neq 0$, then $d(r) < d(x)$ would contradict the minimality of $d(x)$. Hence we must conclude $r = 0$, and $y = qx$.
So every element of $I$ has the form $ax$ for some $a \in D$, that is: $I = (x)$.
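To see the argument in action (a worked example of my own, in the same spirit): in $\Bbb Q[x]$, take $I = (x^3 - 1, x^2 - 1)$. Division with remainder gives:
$x^3 - 1 = x(x^2 - 1) + (x - 1)$
so $x - 1 \in I$. Since $x^2 - 1 = (x + 1)(x - 1)$ and $x^3 - 1 = (x^2 + x + 1)(x - 1)$, every element of $I$ is a multiple of $x - 1$, and no nonzero element of $I$ has smaller degree: $I = (x - 1)$.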
Euclidean domains are very "nice rings". Algebraically, they share many of the desirable features of the integers:
1) Unique factorization
2) No zero divisors
3) Simple ideal structure
4) Primes = irreducibles
5) A division algorithm, allowing for easy computation in quotient rings
6) The Bézout identity for GCDs (see the sketch after this list)
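Items 5) and 6) are completely constructive. Here is a minimal Python sketch of both, assuming a bare-bones representation of polynomials over $\Bbb F_p$ ($p$ prime) as coefficient lists; the representation and the names `poly_divmod`, `poly_gcd` are mine, purely for illustration:

```python
# A minimal sketch: polynomials over F_p (p prime) are lists of coefficients,
# lowest degree first.  Over F_5, [4, 0, 0, 1] stands for x^3 - 1 (-1 = 4 mod 5).

def trim(f):
    """Drop leading (highest-degree) zero coefficients in place."""
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def poly_divmod(a, b, p):
    """Division algorithm: return (q, r) with a = q*b + r and deg(r) < deg(b)."""
    a = trim([c % p for c in a])
    b = trim([c % p for c in b])
    if b == [0]:
        raise ZeroDivisionError("division by the zero polynomial")
    q = [0] * max(len(a) - len(b) + 1, 1)
    inv = pow(b[-1], -1, p)              # the leading coefficient is a unit in F_p
    while len(a) >= len(b) and a != [0]:
        shift = len(a) - len(b)
        coef = (a[-1] * inv) % p         # chosen to cancel the leading term of a
        q[shift] = coef
        for i, c in enumerate(b):        # a -= coef * x^shift * b
            a[shift + i] = (a[shift + i] - coef * c) % p
        trim(a)                          # the leading term is now zero
    return q, a

def poly_gcd(a, b, p):
    """Euclid's algorithm: the repeated division with remainder used above."""
    while trim([c % p for c in b]) != [0]:
        _, r = poly_divmod(a, b, p)
        a, b = b, r
    return a
```

Over $\Bbb F_5$, for instance, `poly_gcd([4, 0, 0, 1], [4, 0, 1], 5)` returns `[4, 1]`, i.e. $x + 4 = x - 1$: up to a unit, the generator of $(x^3 - 1, x^2 - 1)$, mirroring the $\Bbb Q[x]$ example above. Back-substituting the quotients produced by `poly_divmod` is what yields the Bézout coefficients.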
One of the remarkable features of POLYNOMIAL rings over a field is that they provide "additional structure" on top of the field, which turns out to be just enough "extra stuff" to create larger fields out of smaller ones.
In essence, what is happening here is an abstraction of the way "algebraic numbers" were adjoined to the rational field. This process was set in motion by the observation of the Pythagorean identity for a right triangle:
$a^2 + b^2 = c^2$.
If $a = b = 1$ (as is the case with half a "unit square"), then $c$ is a square root of 2:
$1 + 1 = 2 = c^2$, that is:
$c^2 - 2 = 2 - 2 = 0$.
Algebraic expressions involving $\sqrt{2}$ were originally written as "formal sums"; the expression:
$a+b\sqrt{2}$ was held to be "non-simplifiable". It was taken as "obvious" that such things referred to an actual "magnitude", that magnitudes could be algebraically manipulated, and that they were "numbers" (and formed a field, although they were not called such). The modern term is "adjunction", and it presupposes that we can create an even larger structure (in this case, the algebraic CLOSURE of the rational numbers) in which all such expressions live.
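In modern terms, this adjunction is precisely a quotient of a polynomial ring by the ideal generated by a suitable polynomial, which is where the principal ideal structure above pays off. As a worked instance (standard, though not spelled out in the text being discussed):
$\Bbb Q(\sqrt{2}) \cong \Bbb Q[x]/(x^2 - 2)$
Writing $\theta$ for the coset $x + (x^2 - 2)$, every element of the quotient is uniquely of the form $a + b\theta$ with $a, b \in \Bbb Q$ (divide by $x^2 - 2$ and keep the remainder), and since $\theta^2 = 2$:
$(a + b\theta)(c + d\theta) = ac + (ad + bc)\theta + bd\theta^2 = (ac + 2bd) + (ad + bc)\theta$
which is exactly the arithmetic of the "formal sums" $a + b\sqrt{2}$.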
Almost everything I have written here was certainly known to someone like Euler (and most of it to Euclid), but they would not have had the same VOCABULARY. It would not be until Évariste Galois investigated solutions of polynomial equations IN GENERAL that "shuffling of roots" would come to be seen as the most salient feature of SOLVING polynomials (a vestige of this, or perhaps a hint of things to come, is found when high-school students learn to "rationalize" the denominators of expressions like:
$\dfrac{x}{\sqrt{2} + \sqrt{3}}$
by multiplying by "the conjugate", which is another root of the SAME polynomial that $\sqrt{2} + \sqrt{3}$ satisfies).
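Explicitly:
$\dfrac{x}{\sqrt{2} + \sqrt{3}} = \dfrac{x}{\sqrt{2} + \sqrt{3}} \cdot \dfrac{\sqrt{3} - \sqrt{2}}{\sqrt{3} - \sqrt{2}} = \dfrac{x(\sqrt{3} - \sqrt{2})}{3 - 2} = x(\sqrt{3} - \sqrt{2})$
and indeed $\sqrt{2} + \sqrt{3}$ and $\sqrt{3} - \sqrt{2}$ are both roots of $x^4 - 10x^2 + 1$, whose four roots are $\pm\sqrt{2} \pm \sqrt{3}$.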
It would take some time after that for it to be realized that such "shufflings" captured something ESSENTIAL about "reversible algebraic operations" (solving equations), including our two favorite such operations: addition and multiplication.
Addition can be seen as the "abstraction" of COUNTING, and multiplication as the "abstraction" of TRANSFORMING, or more precisely:
Abelian groups are $\Bbb Z$-modules, and every monoid can be realized as a monoid of endomorphisms (self-maps) of some set, with groups corresponding to the invertible transformations of said set. In this sense, one can honestly say:
"Integers are the single most important algebraic structure there is, to understand anything else, you must know them".
When one learns long division, one is doing "deep ring theory", although that is hardly apparent at the time. It is little wonder it proves so difficult the first time around.
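To make the parallel explicit: long division of $473$ by $41$ produces $473 = 11 \cdot 41 + 22$ with $0 \leq 22 < 41$, which is precisely the Euclidean property of $\Bbb Z$ (with $d(n) = |n|$); the polynomial division algorithm imitates it degree by degree.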