Continuity of functions from ℝ → ℝ²

Jolb
I was thinking of a pathological function that, according to my intuitive ideas, would be discontinuous, but it actually satisfies a certain kind of continuity.

First I claim that any element x ∈ [0,1) can be expressed in its decimal [or other base] expansion as
x = 0.d1d2d3...
where each di is an element of the set {0,1,2,3,4,5,6,7,8,9} [with easy generalizations to any other base].

Let me define a function

f: [0,1) → [0,1)×[0,1)
x = 0.d1d2d3d4d5d6... ↦ (0.d1d3d5..., 0.d2d4d6...)
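
Here's a quick Python sketch of f acting on truncated expansions, just to make the definition concrete (the string representation and helper name are my own choices):

Code:
def f(digits):
    # digits: the decimal digits of x after the point, e.g. "123456" for x = 0.123456
    # returns the (truncated) digit strings of the two coordinates of f(x)
    return "0." + digits[0::2], "0." + digits[1::2]

print(f("123456"))  # ('0.135', '0.246'), i.e. f(0.123456) = (0.135, 0.246)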

Now I'm just really curious about this function. Such a mapping must "jump around," to my mind. It has no derivative of course, but I think it's worse than the famous nondifferentiable ℝ→ℝ functions like the Weierstrass function, since it isn't even built up out of continuous functions.

But if we take a typical Euclidean norm for points in [0,1)×[0,1),
|(x1,y1) - (x2,y2)| = [(x1-x2)² + (y1-y2)²]^(1/2)

then we can use the ε-δ definition and say:
For all x, x' ∈ [0,1),
|x - x'| < 10^(-n) ⇒ |f(x) - f(x')| < √2 · 10^(-n/2).
So it is continuous.
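
As a rough numerical sanity check of that claimed bound in the carry-free case (reusing my toy truncated version of f, with n = 10 here):

Code:
import math

def f(digits):
    return float("0." + digits[0::2]), float("0." + digits[1::2])

x, xp = "123456789012", "123456789099"   # agree in the first 10 digits, so |x - x'| < 10^(-10)
(a, b), (ap, bp) = f(x), f(xp)
dist = math.hypot(a - ap, b - bp)
print(dist < math.sqrt(2) * 10**(-5))    # True: within the claimed √2 · 10^(-n/2) bound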

Is this right? It is similar to the Weierstrass function in that it is nondifferentiable but continuous, but it seems like its continuity must be of a different sort. Is there any classification for this kind of function, or are there any interesting rules of thumb for understanding these pathological functions? Is this particular function of any use, perhaps since it is idiosyncratically continuous?
 
It's just straight up continuous - if you change the number by only a little bit (the decimal places far to the right) then the output only changes a little. There's nothing terribly surprising there.

You do have to be careful about the construction, because numbers can have more than one decimal representation. For example, does 1.00000... map to (1, 0), or does .9999... map to (.9999..., .9999...)?
 
Office_Shredder said:
It's just straight up continuous - if you change the number by only a little bit (the decimal places far to the right) then the output only changes a little. There's nothing terribly surprising there.
But you have to admit it's weird! I guess I didn't make it clear in my last post: the reason I find its continuity surprising is that continuous functions tend to trace out "connected paths"--whereas this one seems to hop around pretty wildly. Do you see what I'm getting at?

You do have to be careful about the construction, because numbers can have more than one decimal representation. For example, does 1.00000... map to (1, 0), or does .9999... map to (.9999..., .9999...)?
I realized this is a problem, so that's why I wrote my intervals as half-open intervals [0,1). 1 is strictly excluded from that interval, so .999... shouldn't be either, right? I guess if you wanted closed intervals, you could make it unambiguous by always choosing the infinite-string-of-nines representation for the integers instead of the normal representation. [This is the kind of mathematician stuff that I really find annoying, btw.]
 
I'm not convinced the function is continuous after all.

Sadly, making the interval half-open doesn't resolve the definitional problem because ambiguous decimal expansions occur at every 10-adic rational. So you have to choose a convention (e.g. always use the expansion with 9s if available).

But, if we do that, the function doesn't appear to be continuous to me. For example, at 0.119999... = 0.12, the function takes the value (0.2, 0.2) under the convention I proposed above. But if you move ε to the right, you get a number like 0.1200000000000000000 + ε, which maps to (0.1 + f1(ε), 0.2 + f2(ε)), which is not close to (0.2, 0.2). If you adopt the opposite convention (always using the expansion with 0s), then the same problem occurs if you move ε to the left.

So I don't think it's continuous. But I'm open to the possibility that I either misunderstood the definition or screwed up somewhere...
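
The jump is easy to see numerically with truncated expansions (this is just my own toy version of f, so take it as an illustration):

Code:
def f(digits):
    # digits: decimal digits of x after the point
    return float("0." + digits[0::2]), float("0." + digits[1::2])

# 0.12 written with trailing nines (truncated), per the convention above:
print(f("119999999999"))   # (0.199999, 0.199999), i.e. about (0.2, 0.2)
# a point just a tiny epsilon to the right of 0.12:
print(f("120000000001"))   # (0.1, 0.200001) -- the first coordinate jumps by about 0.1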
 
That is very interesting. Your counterexample is very nice. I appreciate the effort you put into that.

However, I find it curious that an intuitive concept like "continuity" should rely on notational conventions. Would I be on the wrong track if I said that these issues of notational conventions determining continuity do not happen for mappings between spaces of the same dimension?
 
It's not that continuity relies on a notational convention. Continuity has a very precise meaning. It's just that decimal expansion is in some sense not a perfect way to describe numbers.

[Below is a bunch of related technical nonsense. Feel free to skip it.]

Let D = {0,1,2,3,4,5,6,7,8,9}, the set of digits, and let X = D^ℕ, the set of digit sequences. The map f: X → X² which takes (d1,d2,d3,d4,d5,d6,...) to ((d1,d3,d5,...), (d2,d4,d6,...)) is a homeomorphism, i.e. it's one-to-one, onto, and continuous (with continuous inverse).

So far, nothing about notational conventions.

Now, you wanted to think of f as a map [0,1] → [0,1]².

The space X is definitely related to [0,1]. Indeed, the map e: X → [0,1] which takes (d1,d2,d3,...) to e(d1,d2,d3,...) = Σ_{i=1}^∞ di/10^i is continuous and onto. What that means is that [0,1] can be thought of as X, but where we've "glued together" any two strings that represent the same number (in the decimal expansion sense).
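
To illustrate the gluing on finite truncations (a sketch, assuming float arithmetic is accurate enough here): the sequences (2,0,0,...) and (1,9,9,...) have the same image under e in the limit.

Code:
def e(digits):
    # e(d1, d2, ...) = sum of di / 10^i, computed on a finite truncation
    return sum(int(d) / 10**(i + 1) for i, d in enumerate(digits))

for n in (3, 6, 12):
    print(e("2"), e("1" + "9" * n))   # the second value approaches the first as n grows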
 
Hmmm, I don't know, economicsnerd. I'm pretty convinced by eigenperson's counterexample that there are plenty of discontinuities that arise from being consistent with your representation of points ending in infinite 0's or 9's: e.g. always 0.1045, or always 0.10449999... [I'm not quite sure what "10-adic rationals" means, but it seems like these points definitely occur wherever the expansion terminates, i.e. di = 0 for all i > n, since 0.d1...dn000... = 0.d1...(dn - 1)999...]

Any other opinions on this?

If it were true that all such points are points of discontinuity, then we could get arbitrarily close to any number in [0,1) by choosing a sufficiently long finite string matching its decimals and then appending a copy of the last digit followed by infinitely many nines--and voilà, a discontinuity. So the discontinuities would be dense in [0,1), right?
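
Here's a sketch of that density argument in code (the helper name is mine, and the float truncation is only illustrative): truncating any x to its first n digits lands on one of these ambiguous terminating points, within 10^(-n) of x.

Code:
import math

def nearby_ambiguous(x, n):
    # truncate x to n decimal digits: a terminating decimal, hence (apart from 0) an ambiguous point
    return math.floor(x * 10**n) / 10**n

x = 0.123456789
print(nearby_ambiguous(x, 4))                       # 0.1234
print(abs(x - nearby_ambiguous(x, 4)) < 10**(-4))   # True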
 
Jolb said:
... then we could get arbitrarily close to any number in [0,1) by choosing a sufficiently long finite string matching its decimals and then appending a copy of the last digit followed by infinitely many nines--and voilà, a discontinuity. So the discontinuities would be dense in [0,1), right?

This is correct.
 
What happens if you define it over the rationals, i.e., only allow finite representations? Every rational has a finite representation in some base, and so can be written as a finite power series.

As it is now, it's not even a function.
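
As for the claim that every rational has a finite representation in some base, here's a quick check (the helper is my own; 1/3 never terminates in base 10 but terminates immediately in base 3):

Code:
from fractions import Fraction

def digits_in_base(q, base, n):
    # first n base-`base` digits of a rational q in [0, 1)
    out = []
    for _ in range(n):
        q *= base
        d = int(q)
        out.append(d)
        q -= d
    return out

print(digits_in_base(Fraction(1, 3), 10, 6))   # [3, 3, 3, 3, 3, 3] -- never terminates
print(digits_in_base(Fraction(1, 3), 3, 6))    # [1, 0, 0, 0, 0, 0] -- terminates in base 3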
 