Algr said:
What I meant about an infinite number of zeros is 0 * infinity, or {0+0+0+0+...}. These values are undefined.
The expression (0+0+0+0+...) is usually used to denote a particular infinite sum, whose value is zero. That is not the same thing as 0 * infinity (or 0 * +infinity), which are undefined expressions in the projective and extended reals, respectively.
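To make the distinction concrete, here is the standard definition of that infinite sum as a limit of partial sums -- a sketch in LaTeX notation, assuming nothing beyond the ordinary reals:

\[
0 + 0 + 0 + \cdots \;=\; \sum_{n=1}^{\infty} 0 \;=\; \lim_{N \to \infty} \sum_{n=1}^{N} 0 \;=\; \lim_{N \to \infty} 0 \;=\; 0.
\]

By contrast, 0 * infinity is left undefined precisely because no single value is consistent with the limits of products a_n * b_n where a_n -> 0 and b_n -> +infinity (for instance, a_n = 1/n with b_n = n gives 1, while a_n = 1/n^2 with b_n = n gives 0).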
For example, start with a value of 5. Divide it by x. As x becomes larger, 5/x becomes smaller. So it stands to reason that when x becomes infinite, 5/inf = 0.
Does it really? Does it make sense for "x to become infinite"? If so, would "5/x" still make sense? If so, would "5/x" really be equal to zero? Is there only one infinite value, so that you can name it with "inf"?
None of these things are automatic, and I can name specific numeric structures that demonstrate different behaviors. For example, in the reals, it doesn't make sense for x to be infinite (though it does make sense to ask for the limit as x approaches +infinity). In the projective reals, there is only one infinite value, and 5/inf = 0. But in the hyperreals, there are many infinite values, and 5/x is never zero, even when x is infinite.
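To spell out the real-number case -- here is a sketch, in LaTeX notation, of the standard epsilon-style reading of what "5/x becomes smaller" actually asserts:

\[
\lim_{x \to +\infty} \frac{5}{x} = 0
\quad\text{means}\quad
\forall \varepsilon > 0 \;\; \exists M \;\; \forall x > M : \left|\frac{5}{x}\right| < \varepsilon,
\]

and taking M = 5/epsilon verifies it; at no point is an infinite value of x ever evaluated. In the hyperreals, on the other hand, an infinite x makes 5/x a nonzero infinitesimal rather than 0.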
It also seems logical that if you take your divided parts and put them back together, you'd get your original 5 back.
Divided parts? Who said anything about divided parts?
What I understand about the calculus concept of "limits" is that it seems to be based on the _assumption_ that .333...=1/3 and that other infinite sequences add up this way.
Firstly, calculus is usually presented in terms of the real numbers -- so decimal notation has absolutely nothing to do with the foundations of calculus.
Secondly, your criticism has no force; it has two critical flaws:
(1) Insisting that all knowledge be justifiable in terms of more fundamental knowledge is known to be folly -- it's called the "infinite regress problem".
(2) Much of mathematics gains its applicability from its well-defined (but abstract) definitions -- the very fact that you are implicitly working with decimal and rational numbers means that all of their defined and derived properties apply.
To state (2) differently -- if you insist that 0.333... and 1/3 are different, then you cannot possibly be using those symbols according to their usual meaning, which is:
. 0.333... denotes the decimal number with 0's in all places to the left of the point and 3's in all places to the right of the point;
. 1/3 denotes the (obvious) rational number.
Conversely, if you are using those symbols according to their usual meaning, then we know that they denote equal numbers.
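And for the record, the equality is a computed fact, not an assumption -- here is a sketch, in LaTeX notation, of the standard geometric-series calculation behind the usual meaning of those symbols:

\[
0.333\ldots \;=\; \sum_{n=1}^{\infty} \frac{3}{10^n} \;=\; \frac{3/10}{1 - 1/10} \;=\; \frac{3}{9} \;=\; \frac{1}{3},
\]

where the second equality is the closed form of a geometric series with ratio 1/10, itself obtained as a limit of partial sums.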
What you are describing here is Zeno's paradox, which most philosophers do NOT consider solved.
The only thing unsolved about Zeno's paradox is precisely what Zeno really meant.