cragar said:
ok so I should think of 2^{\aleph_0} as the set of reals. that's pretty clear.
Well no, that's the opposite of what micromass said.
When you look at 2^{\aleph_0}, you should say to yourself: "2 is a cardinal, and \aleph_0 is a cardinal, so we are doing cardinal exponentiation. The answer is whatever cardinal 2^{\aleph_0} happens to be." [Under the Continuum Hypothesis that's \aleph_1; without CH, ZFC does not decide which aleph it is.]
If you wanted the set of binary sequences (which is in bijection with the set of reals), you would denote that as 2^\mathbb{N}.
And if you specifically wanted to denote the set of functions from \aleph_0 to 2, you would still write 2^{\aleph_0}, but then you would have to add a note saying that you mean the set of functions and not the cardinal, because micromass convinced me that mathematicians would otherwise apply the convention he described by default.
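To make the distinction concrete, here is how the two readings differ, assuming the standard textbook definition of cardinal exponentiation:

\kappa^{\lambda} := \bigl|\{\, f \mid f : \lambda \to \kappa \,\}\bigr| \quad \text{(cardinal exponentiation: the result is a cardinal)}

2^{\mathbb{N}} := \{\, f \mid f : \mathbb{N} \to \{0,1\} \,\} \quad \text{(set exponentiation: the result is the function set itself)}

In particular, the cardinal 2^{\aleph_0} is, by this definition, the cardinality of the set 2^{\mathbb{N}}.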
Now, my question concerns the logical status of what in programming would be called operator overloading: the exponentiation notation has different semantics depending on the "data type" of its arguments, giving set exponentiation in one case and cardinal exponentiation in the other.
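To illustrate what I mean by the analogy, here is a minimal, hypothetical Python sketch (the Cardinal class and the function_space helper are made-up names, purely for illustration): the same exponentiation idea resolves to a number when the arguments are cardinals, and to a set of functions when the arguments are sets.

```python
from itertools import product


class Cardinal:
    """Toy stand-in for a (finite) cardinal number."""
    def __init__(self, n):
        self.n = n

    def __pow__(self, other):
        # Cardinal ** Cardinal -> Cardinal: cardinal exponentiation, the result is a number.
        return Cardinal(self.n ** other.n)

    def __repr__(self):
        return f"Cardinal({self.n})"


def function_space(codomain, domain):
    """Set "exponentiation": all functions from `domain` to `codomain`,
    each function represented as a frozenset of (input, output) pairs."""
    domain, codomain = list(domain), list(codomain)
    return {
        frozenset(zip(domain, outputs))
        for outputs in product(codomain, repeat=len(domain))
    }


# The same "2 to the power 3", read two different ways:
print(Cardinal(2) ** Cardinal(3))                # Cardinal(8): a number
print(len(function_space({0, 1}, {0, 1, 2})))    # 8: the size of a set of functions
```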
Is that legal in terms of set theory? In other words, how would a set theorist (or perhaps a logician) convince me that we can still do cardinal exponentiation properly, starting from ZFC? I thought that once you define an operator, you can't change its meaning. It seems to me that if your operators can change their meaning with their arguments, you are outside the realm of standard logic. I know computer scientists have thought about these kinds of things, but I'm not familiar with how the theorists handle operator overloading.
So: What is the logical status of operator overloading in set theory?