Thanks, yes, that was considered a good book in my day (by me). My guess is that your self-taught course exceeds what many high school courses offer. I myself was not offered calculus in high school (and did not study it on my own); indeed, I did not even learn trig. Upon entering college I was enrolled in a Spivak-type calculus course, but unfortunately that was several years before Spivak's book was available, so every 9 am lecture I missed was lost material. About a year later I was out working in a factory, wondering what had gone amiss.
By the way, many years later I observed the following proof (much improved by a brilliant colleague) of the basic theorem omitted from Fisher and Ziebur and most other books, namely that a continuous function on a closed bounded interval attains a maximum. Let f be defined on the closed interval [0,1], and consider its values on each of the subintervals [0, .1], [.1, .2], ..., [.9, 1] of length 1/10. Since you say your book defined the notion of a complete ordered field, I assume it defined a "least upper bound", i.e. a number that is an upper bound of a given set while no smaller number is. The fundamental axiom of the reals is that every nonempty set of reals has a least upper bound, which is either infinity, if the set is unbounded above, or a finite real number, if the set has a finite upper bound.
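For what it is worth, here is that axiom in symbols, in my own paraphrase (not the wording of any particular book):

```latex
% The completeness axiom as used above -- my paraphrase, not any book's wording.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Every nonempty set $S \subseteq \mathbb{R}$ has a least upper bound $\sup S$:
\[
  \sup S =
  \begin{cases}
    \text{a finite real number} & \text{if $S$ is bounded above,}\\
    +\infty                     & \text{if $S$ is unbounded above.}
  \end{cases}
\]
That is, $\sup S$ is an upper bound of $S$, and no smaller number is an upper bound of $S$.
\end{document}
```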
Subdivide the interval [0,1] into those smaller intervals of length 1/10. Then choose a subinterval, say [.4, .5], on which the least upper bound of the values of f is at least as large as on any other such subinterval. Hence if there is a maximum value of f on [0,1], it must occur in this subinterval [.4, .5]. Then take the first decimal place of our desired maximum point to be .4.
Now subdivide the interval [.4, .5] further into subintervals of length 1/100, and choose one, say [.43, .44], on which again the least upper bound of f is at least as large as on any other such subinterval. If there is a maximum of f on [0,1], then again it must occur in this subinterval. Thus the first two decimal places of our desired maximum point are .43.
Continuing, we obtain an infinite decimal of the form c = .43xxxx..., hence a real number in [0,1], and it is a straightforward exercise in the definition of continuity to show that, if f is continuous at c, then the finite real number f(c) is a maximum for the values of f on the whole interval [0,1]. (The key point: every neighborhood of c contains one of the chosen subintervals, and on each of those the least upper bound of f is the same as on all of [0,1].) In particular, f is bounded by a finite real number on [0,1] and does in fact attain a finite maximum at some point of [0,1].
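If it helps to see the digit-picking procedure in motion, here is a small numerical sketch I cooked up (not part of the proof): it estimates each least upper bound by sampling, so it only illustrates the construction and does not replace the completeness axiom, and the particular f is just an arbitrary continuous example.

```python
import math

def approx_sup(f, lo, hi, samples=1000):
    # Estimate the least upper bound of f on [lo, hi] by sampling.
    # (The proof uses the exact sup from the completeness axiom;
    # sampling is only an approximation for illustration.)
    return max(f(lo + (hi - lo) * k / samples) for k in range(samples + 1))

def digit_by_digit_max(f, digits=6):
    # Build c = .d1 d2 d3 ... by repeatedly choosing a tenth of the current
    # interval on which the (estimated) sup of f is at least as large as on
    # any other tenth, exactly as in the argument above.
    lo, hi = 0.0, 1.0
    for _ in range(digits):
        width = (hi - lo) / 10
        best = max(range(10),
                   key=lambda d: approx_sup(f, lo + d * width, lo + (d + 1) * width))
        lo = lo + best * width
        hi = lo + width
    return lo  # an approximation of the point c of the construction

if __name__ == "__main__":
    f = lambda x: math.sin(3 * x) + 0.5 * x   # any continuous f on [0, 1]
    c = digit_by_digit_max(f)
    print(c, f(c))   # c is near a maximizer, f(c) near the maximum value
```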
Now why would such a simple proof be omitted from a college calculus course? Notice it only requires knowing that an infinite decimal does define a real number, plus the definition of continuity, things which supposedly are included in the course. (One easily extends this proof to the case of f defined on any closed bounded interval [a,b], by sending [0,1] to [a,b] via t --> a + t(b-a) and composing this map with f.)
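And the rescaling step in the same illustrative style, assuming the digit_by_digit_max sketch above:

```python
# Rescaling sketch (again just illustration, not the proof): to treat f on
# [a, b], precompose with t -> a + t*(b - a), which maps [0, 1] onto [a, b].
def on_unit_interval(f, a, b):
    return lambda t: f(a + t * (b - a))

# Usage with the digit_by_digit_max sketch above:
#   c = digit_by_digit_max(on_unit_interval(f, a, b))
#   maximizer_of_f = a + c * (b - a)
```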