# I Fast growing hierarchy beyond Veblen

1. Aug 9, 2016

### tzimie

I am fascinated by the fast-growing hierarchy. Up to epsilon-naught, and even above in the Veblen hierarchy, any value, say the Veblen function $\varphi(1000, 3)$, can in principle be expanded into a very long string of function calls, ultimately repeating the successor function ($f_0$). In any case these ordinals are countable, and for finite input we get a huge but finite output.
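For the small levels this expansion is easy to make concrete. A minimal sketch (the names and conventions here are mine, using the common rules $f_{k+1}(n) = f_k^n(n)$ and the fundamental sequence $\omega[n] = n$):

```python
def f(k, n):
    """Fast-growing hierarchy at finite index k.

    f_0(n) = n + 1, and f_{k+1}(n) iterates f_k exactly n times on n,
    so every value ultimately bottoms out in successor steps.
    """
    if k == 0:
        return n + 1
    for _ in range(n):
        n = f(k - 1, n)
    return n

def f_omega(n):
    # At the limit ordinal ω we diagonalize: ω[n] = n, so f_ω(n) = f_n(n).
    return f(n, n)

# f_1 doubles, f_2 is roughly n * 2^n
print(f(1, 3), f(2, 3), f_omega(2))  # prints: 6 24 8
```

Already $f_3$ is far beyond anything practically computable; the point is only that each value is finite and mechanically determined.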

What I don't understand is beyond Veblen: https://en.wikipedia.org/wiki/Ordinal_collapsing_function
The point where I am lost is where the uncountable $\Omega$ is injected out of the blue. I don't understand whether it is possible, from that point on, to expand functions indexed by such ordinals (with finite input) into finite strings. Maybe this is because I don't understand why injecting an uncountable ordinal does not make the whole structure uncountable.

Please unstick me; it looks like I've just reached the fixed point of my brain )

2. Aug 9, 2016

### stevendaryl

Staff Emeritus
I have not spent a lot of time thinking about it, but the definition of $\psi(\alpha)$ is that it returns the smallest ordinal that cannot be expressed using $0, 1, \omega, \Omega$, ordinal addition, multiplication, and exponentiation, and the function $\psi$ itself. If you have a finite language of constants, operators, and function symbols, then you can only express countably many ordinals. Since there are uncountably many countable ordinals, there will always be some countable ordinal that cannot be expressed. So $\psi(\alpha)$ is always a countable ordinal.
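The counting argument can be made concrete: finite expressions over a finite alphabet can be enumerated size by size, so there are only countably many of them. A toy enumerator (the symbols `w`, `W`, `p` standing for ω, Ω, ψ are my own ad-hoc choices, purely to illustrate countability):

```python
from itertools import count, islice, product

ATOMS = ["0", "1", "w", "W"]   # constants 0, 1, ω, Ω
OPS = ["+", "*", "^"]          # addition, multiplication, exponentiation

def terms():
    """Enumerate every finite expression of the language, smallest first.

    Each size class (size = number of atoms used) is finite, so the whole
    language is a countable union of finite sets: only countably many
    ordinals can ever be denoted by it.
    """
    by_size = {1: list(ATOMS)}
    for size in count(1):
        yield from by_size[size]
        nxt = [f"p({a})" for a in by_size[size]]            # unary ψ(...)
        for left in range(1, size + 1):                      # binary terms
            for a, op, b in product(by_size[left], OPS, by_size[size + 1 - left]):
                nxt.append(f"({a}{op}{b})")
        by_size[size + 1] = nxt

print(list(islice(terms(), 6)))  # ['0', '1', 'w', 'W', 'p(0)', 'p(1)']
```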

3. Aug 9, 2016

### tzimie

But up to Veblen, the fast-growing hierarchy is built in a constructive manner, so you can always decode any value of a given function by going down through the ordinal hierarchy. So if some ordinal "can't be expressed", it is countable, but you can't expand the notation down to the successor function, correct?

4. Aug 9, 2016

### stevendaryl

Staff Emeritus
Definitely not. The simplest example in the Wikipedia article is that $\psi(0) = \epsilon_0$. $\epsilon_0$ can be thought of as an infinitely tall tower of exponentials:

$\epsilon_0 = \omega^{\omega^{\omega^{...}}}$

That cannot be written as a finite expression involving just ordinal addition, multiplication and exponentiation.

5. Aug 9, 2016

### tzimie

Yes, but a function in the fast-growing hierarchy indexed with epsilon-naught has a finite value (a very big one, even for n=2). This happens because when you decode it for, say, parameter n=2, you use the second element of the fundamental sequence for epsilon-naught, which is just $\omega^\omega$. After additional steps you get a finite expression.

So my question is (with your help I now understand what I actually wanted to ask, lol): what is the fundamental sequence for that "smallest ordinal that cannot be expressed..."?

6. Aug 9, 2016

### stevendaryl

Staff Emeritus
Well, the fundamental sequence is different for every $\psi(\alpha)$. For $\epsilon_0 = \psi(0)$, the fundamental sequence (I think) is $\omega, \omega^\omega, \omega^{\omega^\omega}, ...$
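One common convention (essentially the rules given on the Wikipedia page) assigns a fundamental sequence to every limit ordinal below $\epsilon_0$ via its Cantor normal form. A sketch using my own nested-list encoding (an ordinal is the list of its CNF exponents, largest first):

```python
def fs(a, n):
    """n-th term of the fundamental sequence of a limit ordinal below ε0.

    Encoding: an ordinal is its list of Cantor-normal-form exponents,
    so [] is 0, [[]] is ω^0 = 1, [[[]]] is ω^1 = ω, and so on.
    Rules: if the last term is ω^(b+1), replace it by ω^b taken n times;
    if the last exponent is itself a limit, recurse into it.
    """
    head, last = a[:-1], a[-1]
    if last and last[-1] == []:          # last term is ω^(b+1)
        return head + [last[:-1]] * n    # ... approach it by ω^b · n
    return head + [fs(last, n)]          # last exponent is itself a limit

ZERO = []            # 0
ONE = [ZERO]         # ω^0 = 1
OMEGA = [ONE]        # ω^1 = ω
TWO = [ZERO, ZERO]   # 1 + 1 = 2

assert fs(OMEGA, 3) == [ZERO, ZERO, ZERO]   # ω[3] = 3
assert fs([OMEGA], 2) == [TWO]              # (ω^ω)[2] = ω^2
assert fs([TWO], 2) == [ONE, ONE]           # (ω^2)[2] = ω·2
```

$\epsilon_0$ itself has no CNF in this sense; its fundamental sequence is the tower $\omega, \omega^\omega, \omega^{\omega^\omega}, \ldots$, so evaluating $f_{\epsilon_0}(2)$ first drops to $f_{\omega^\omega}(2)$ and then unwinds through the rules above.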

7. Aug 10, 2016

### tzimie

Yes, right, but what is the fundamental sequence for $\epsilon_\Omega$ ?

8. Aug 10, 2016

### SSequence

I don't know about the specific question, but here are a few points.

If we denote the first uncountable ordinal by $\Omega$ and write $\epsilon(x)$ for the x-th epsilon number, then $\epsilon(\Omega)=\Omega$. Similarly $\epsilon(\omega_1^{CK})=\omega_1^{CK}$, where $\omega_1^{CK}$ is the Church–Kleene ordinal.

a) Normal functions
A normal function, say f, has three properties:
1. f(0) > 0
2. f must be strictly increasing
3. f must be continuous (its value at a limit must equal the least upper bound of all previous values)

A function such as:
f(x) = x.ω
is not normal because it satisfies none of the three properties above. You only need to test values on a small initial interval to see that.
f(x) = ω.x
is not normal because it doesn't satisfy property 1.
However, a slight modification of this function is normal (if you work at it a bit, you can identify the function enumerating its fixed points):
f(x) = ω + ω.x
If we switch the terms, though, it doesn't remain normal. For example:
f(x) = ω.x + ω
isn't normal because it fails to satisfy property 3 (continuity).

b) As you may have read, every normal function has fixed points. Suppose we are given some normal function f. I am sure one can show in general (I don't know much about the rigorous arguments) that the first fixed point of f is the limit/least upper bound of the following sequence:
f(0), f(f(0)), f(f(f(0))), ....
If you denote the x-th fixed point by F(x), then the limit of the above sequence is F(0). Similarly, I think, F(1) can be given as the limit of (recall that f(F(0))=F(0)):
f(F(0)+1), f(f(F(0)+1)), f(f(f(F(0)+1))), .....

Because Veblen showed that F is normal, F(ω) would be the limit of F(0), F(1), F(2), F(3), .....

Note that the first fixed point of the epsilon numbers can be written as the limit of (I am using square brackets to denote subscripting):
ε(0), ε[ε(0)], ε[ε[ε(0)]], ε[ε[ε[ε(0)]]], .....
As you can see, this is a special case of the point made above.

c) I think that if you iterate the fixed-point process for a normal function just a finite number of times, there is no problem. However, starting from ω you would have to be careful. At limits, the function you get from a "point-wise supremum" will certainly not be normal; for example, I am quite sure it would stay constant on very large intervals. This has to be fixed at limits.

Probably this point is a little easier to see if you use the function ω+ω.x (or for that matter 1+x) instead of ω^x.

edit:
I guess it might be worth pointing out one more, simpler example for (b). For example, we say that ε0 is the first fixed point of f(x)=ω^x. One way to see this is to look at the sequence mentioned in (b). For example:
f(0)=1
f(f(0))=f(1)=ω
f(f(f(0)))=f(ω)=ω^ω
f(f(f(f(0))))=f(ω^ω)=(ω^ω)^ω
and so on...
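As a loose real-number analogue of "iterate from 0 and take the least upper bound" (only an analogy; no ordinal arithmetic is involved), the function $f(x) = \sqrt{2}^x$ is continuous and increasing with two fixed points, 2 and 4, and iterating from 0 climbs exactly to the least one:

```python
import math

def f(x):
    # real analogue of a normal function: continuous and increasing
    return math.sqrt(2) ** x

x = 0.0
for _ in range(200):
    x = f(x)          # f(0), f(f(0)), f(f(f(0))), ...

# the iteration converges to the *least* fixed point, x = 2 (never 4)
print(round(x, 10))   # prints: 2.0
```

The larger fixed point 4 is invisible to the iteration, just as the iteration from 0 in (b) reaches F(0) and not any later fixed point.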

Last edited: Aug 10, 2016
9. Aug 11, 2016

### tzimie

OK, it all makes sense, but how can you calculate $f_{\Omega^{\Omega^\Omega}} (2)$?
This is a huge but finite number. So to calculate such things you have to constructively define fundamental sequences for the uncountable ordinals. Constructively, because otherwise you won't be able to calculate. You can't just say "it is the first ordinal that can't be expressed using...".

The slow-growing hierarchy is interesting because it "resists" the strength of large ordinals.
And googolists somehow manage to calculate values.

10. Aug 11, 2016

### SSequence

One relatively minor point that I should clarify from my previous post: it isn't necessary that the sequence in (b) be strictly increasing (it can merely be non-decreasing). For example, consider the case f(x)=1+x. For the corresponding function F we have:
F(0)=ω
F(1)=ω+1

The sequence from (b) for F(1) also has all of its terms equal to ω+1. Because ω+1 is a non-limit, there isn't any fundamental sequence for it.

----

Looking at the definition of the slow-growing hierarchy, it is elementary enough for me to understand. But I am not seeing any problem.

You might be confusing a few things here. Basically there are two variations on methods of describing recursive ordinals (the ones below $\omega_1^{CK}$). One is the method of normal functions introduced by Veblen, and the other is the one that uses Ω. I have no idea about the second method. I have an informal sense of how the first method works, and I was describing that in the previous post (though only a very early stage of it ... but the early stages give a good idea of how one can extend it).

In all probability, Ω is a construct used for convenience that could be avoided "in principle". Another thing is that however you give a description of a recursive ordinal, it is not the same as giving an actual recursive system for it (though logicians often give that too).

Here is the order of some of the initial named ordinals (they are actually very, very large in the sense of a fully rigorous description):
first epsilon number
first fixed point of the epsilon numbers
Feferman–Schütte
Ackermann
small Veblen
large Veblen
Bachmann–Howard

Of course all of these are below $\omega_1^{CK}$, and so is any ordinal for which you can give a recursive system that resolves it. But when you look at the informal symbols given to them, you will often find the symbol Ω attached. That doesn't mean at all that these ordinals are uncountable.

And of course you can attach a reasonably defined set of fundamental sequences (Cantor's system for ε0 is a good example) to a recursive ordinal (up to and including that point) and then define a hierarchy like the one you linked. In that case the corresponding functions in the hierarchy can be expected to be computable.

But I don't think you can give any definition whatsoever of fundamental sequences for all ordinals up to and including Ω. That would imply that Ω is denumerable and hence violate its definition.

Last edited: Aug 11, 2016
11. Aug 11, 2016

### tzimie

Me too

Exactly. To find the growth rate you need to have an actual recursive system

Yes, it is very easy, but what puzzles me is the right-hand side for the ordinals beyond Veblen.
For example, things like $g_{\vartheta(\Omega^{\Omega^{\Omega^{\Omega^\omega}}})}(n) \approx \{n,n((0,1)1)2\}$
(the right-hand side of the approximation).
It means they know the "actual recursive system", but I don't know where it is defined.

12. Aug 11, 2016

### SSequence

I think just a description (in terms of fixed points, for example) along with a precise specification of fundamental sequences might suffice. It depends, though, I think (I am not fully sure). Basically, writing the growth function (in functional/program form) for a select few ordinals certainly shouldn't require the recursive system as a necessity.

For example, consider $\omega^\omega$. I can define it as the first fixed point of $f(x)=\omega+\omega \cdot x$ (of course $\omega^\omega$ is relatively very small, and that's why the description is also easy). Now I can define a function g from $\omega$ to $\omega^\omega$:
$g(n)=f^{n+1}(0)$ (composition that many times)
This can be chosen as the fundamental sequence for $\omega^\omega$ itself.

Now I can say that every limit ordinal $l<\omega^\omega$ can be written (for some natural number n) as:
$l= \sum_{i=n}^{0} g(i) \cdot a_i$
where each $a_i<\omega$. Note that the coefficients $a_i$ are unique.

Based on this I can define the fundamental sequence for $l$. It would be a bit complicated to write, but if you are interested I will write it down.
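For what it's worth, the rule is short if limit ordinals below $\omega^\omega$ are written in the plain base $\omega^i$ rather than the $g(i)$ above (a simplification of mine, not SSequence's exact scheme): decrement the least nonzero coefficient and put n at the next power down.

```python
def fs(coeffs, n):
    """Fundamental sequence for a limit ordinal below ω^ω.

    coeffs[i] is the coefficient of ω^i, so ω^2 = [0, 0, 1] and
    ω·2 = [0, 2]; coeffs[0] == 0 since the ordinal is a limit.
    """
    c = list(coeffs)
    k = next(i for i, a in enumerate(c) if a)   # least power present
    c[k] -= 1                                   # peel off one copy of ω^k ...
    c[k - 1] = n                                # ... and approach it by ω^(k-1)·n
    return c

assert fs([0, 0, 1], 3) == [0, 3, 0]   # (ω^2)[3] = ω·3
assert fs([0, 2], 3) == [3, 1]         # (ω·2)[3] = ω + 3
```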

------

Anyway, I think if you pick up an intermediate-level proof theory book (not sure if there is any intro-level book) you are going to find some of this stuff (though probably a lot of other, related stuff too).

Even though I am not acquainted with the specific material in these books at all, I have certainly seen (while having a quick glance at one of the books) a description of the binary Veblen hierarchy and an actual "well-ordering function" for Feferman–Schütte.

Last edited: Aug 11, 2016
13. Aug 12, 2016

### tzimie

Yes, yes, and yes, but you are not listening to me )
I CAN work with Veblen up to Feferman–Schütte, but my question is what happens beyond.
You provide examples of fundamental sequences for relatively small ordinals like $\epsilon_0$; there is no problem with epsilons and small omegas. The problem is when $\Omega$ is injected into that system; then you can have things like $\epsilon_\Omega$.

To decode $g_{\epsilon_\Omega}(2)$ you have to know $\epsilon_\Omega[2]$, and I have no idea how that can be done.

14. Aug 12, 2016

### SSequence

As I mentioned before, I am nearly sure that introducing Ω is more of a convenience than an absolute necessity.

You are attaching arbitrary symbols to arbitrary recursive ordinals without assuming a very well understood definition for them. The only reason people use these attached symbols is that they have agreed upon associating certain rigorous descriptions with them.

You are taking these symbols as having some kind of absolute meaning in themselves. That is indeed not the case. These symbols are used just as mnemonics.

All the machinery of functions and constructs that you needed at one point will have to be updated (sometimes essentially re-written) again and again. That's true for any point you will ever reach.

----

As I said before, $\epsilon_\Omega=\Omega$ (see edit). What are you trying to say? Are you saying you must build a fundamental sequence for $\Omega$ to describe a fundamental sequence for a recursive ordinal? Do you need a fundamental sequence for $\Omega^2$ to describe the fundamental sequence for the corresponding recursive ordinal people have chosen to relate it with? (That's why, I think, a symbol such as $\vartheta$ is used: to denote that it maps back to a recursive ordinal.) Indeed, the answer is no.

Most likely what happens is something like this. Think about the large Veblen ordinal as an example. Ultimately, when you build its description in all detail (which Veblen did), you are going to have a normal function associated with it (the "fastest" one in the complex web of normal functions, whose first fixed point will be the large Veblen ordinal itself). The fundamental sequence for the large Veblen ordinal can then just be obtained from that normal function. You can't assume the description before giving it.

edit: I just realised that if you saw $\epsilon(\Omega)$ being used somewhere, it's possible that it might not have been used for epsilon numbers (meaning $\epsilon(x)$ denoting the x-th fixed point of $f(x)=\omega^x$). It seems plausible that the same terminology could be used in a suggestive sense, meaning $\epsilon(x)$ denoting the x-th fixed point of $f(x)=\Omega^x$ (though surely a note would have to be added). In that case the equation above would be incorrect, as for example $\epsilon(0)$ would then be the limit of: $\Omega, \Omega^\Omega, \Omega^{\Omega^\Omega}, \ldots$

Last edited: Aug 12, 2016
15. Aug 12, 2016

### tzimie

Thank you, agreed.

16. Aug 24, 2016

### Deedlit

I know I'm coming into this thread quite late, but here's my two cents:

Like SSequence said, the introduction of uncountable ordinals into the ordinal notations is for convenience, not for necessity. One way to think about Ω that may be easier to digest is that it is a formal symbol that indicates diagonalization. For example, we have $\psi(\alpha) = \varepsilon_\alpha$ for small $\alpha$, so for $\psi(\Omega)$, we think "take the fixed point", so we set it equal to the smallest ordinal for which $\psi(\alpha) = \alpha$, which is $\zeta_0$. Then $\psi(\Omega+\Omega)$ becomes the fixed point of $f(\alpha) = \psi(\Omega+\alpha)$, $\psi(\Omega \cdot \Omega)$ becomes the fixed point of $f(\alpha) = \psi(\Omega \cdot \alpha)$, $\psi(\Omega ^ \Omega)$ becomes the fixed point of $f(\alpha) = \psi(\Omega^\alpha)$, and so on.
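For reference, the landmark values of this ψ (as tabulated in the Wikipedia article on ordinal collapsing functions; φ is the binary Veblen function) make the diagonalization pattern visible:

```latex
\begin{aligned}
\psi(\Omega) &= \zeta_0 = \varphi(2,0), &\quad \psi(\Omega^2) &= \varphi(3,0), \\
\psi(\Omega^\alpha) &= \varphi(1+\alpha,0) \ \text{(for small } \alpha\text{)}, &\quad \psi(\Omega^\Omega) &= \Gamma_0 \ \text{(Feferman--Sch\"utte)}, \\
\psi(\Omega^{\Omega^\omega}) &= \text{small Veblen ordinal}, &\quad \psi(\Omega^{\Omega^\Omega}) &= \text{large Veblen ordinal}, \\
\psi(\varepsilon_{\Omega+1}) &= \text{Bachmann--Howard ordinal}. &&
\end{aligned}
```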

Now, you may wonder why they didn't just describe the notation this way, with Ω as a formal symbol. I believe the reason is that this would require a lot of detail to define how the notation works in every little instance, as well as a lot of work proving various facts about the ordinal notation system, which becomes much easier if we use uncountable cardinals. For example, if we use Ω as a formal symbol, we would need to do some work to show that the recursive system of definitions we construct always terminates (otherwise it is ill-defined), whereas by defining ψ as a function from ordinals to ordinals, this becomes easy.

The basic idea behind using uncountable ordinals in the notation is that the ψ function becomes "stuck" at a fixed point: it stays at that value all the way up until we reach Ω, so the value at Ω nicely equals exactly the fixed point. Then the function becomes unstuck again, so it goes up until it reaches the next fixed point, at which it becomes stuck again until the value Ω·2, which therefore has the value of the second fixed point. This continues for all the various expressions involving Ω, up to Ω^Ω^Ω^... .

tzimie, the fast-growing hierarchy is normally only defined for countable ordinals, so we don't deal with anything like $f_{\Omega^{\Omega^\Omega}}(2)$. The countable ordinals below the Bachmann–Howard ordinal can be expressed as sums of omega-powers and epsilon numbers, and the epsilon numbers below the Bachmann–Howard ordinal are precisely the values of ψ. So the countable ordinals are constructed from 0 and ω using +, exponentiation, and ψ; inside the ψ function we can have Ω, but not outside, or the expression becomes uncountable.

Fundamental sequences for limit ordinals below the Bachmann-Howard ordinal are more or less straightforward to construct; the Wikipedia page on Ordinal Collapsing Functions gives one such definition. If you have trouble understanding the page, or have any other questions, I would be happy to answer.

17. Aug 26, 2016

### SSequence

Yeah, I think you are right about the termination issue (or whatever we call it). I will only talk about the fixed-point approach (and not the other one).

Roughly, you want to be sure that the structure you are using is big enough, so to speak. For example, if we use the entries $0 \leq x < \omega_1^{CK}$ for the function $\omega^x$, and we want to use the entries $\omega_1^{CK} \leq x < \omega_1^{CK} \cdot 2$ for enumerating the fixed points of $\omega^x$, then we want to be sure that at least $\omega_1^{CK}$ fixed points of $\omega^x$ (just one for showing the existence of $\epsilon(0)$) occur before $\omega_1^{CK}$. I think this property should hold for a large class of normal functions.** I do not know how to show it in general, though (showing that the first fixed point is recursive shouldn't be difficult, however).

Another problem is that, for example, if you proceed to $(\omega_1^{CK})^2$ in a standard manner (which should lead to Feferman–Schütte; it seems more common to skip the first $\omega_1^{CK}$ points corresponding to the $\omega^x$ values, I think), you would also want to rigorously show the property in the previous paragraph not only for the functions at non-limits but also for the functions at limits (after taking point-wise least upper bounds and removing repetitions). Furthermore, it would also have to be shown that the function formed by selecting the points at $\omega_1^{CK} \cdot x$ (for the x-th value of the function) likewise has its first fixed point before $\omega_1^{CK}$, because the fixed points of this function are going to be plotted in $(\omega_1^{CK})^2 \leq x < (\omega_1^{CK})^2 + \omega_1^{CK}$.

That would be on top of all the facts about normal functions that would have to be proved. If one uses a structure that is too small, the construction obviously becomes faulty (for example, if we use a structure like $\omega^\omega$ instead of $\omega_1^{CK}$ above).

----

Another thing I was thinking is that one should generally be more careful about skipping brackets in add/multiply/exponent notation when using ordinals (because not all the properties carry over from ordinary arithmetic). In the ordinary arithmetic setting we often write a^b^c without worrying whether we mean (a^b)^c or a^(b^c), since both give the same result.

However, when we write something like ω^ω^2, it seems clear that it is intended to mean ω^(ω^2) instead of (ω^ω)^2=ω^(ω·2).

** All normal functions f where:
$f(0) < \omega_1^{CK}$
There exists a program such that, for all $x < \omega_1^{CK}$ and all programs that generate an ordering of x, when such a program's index is given as input, the output is the index of some program that generates an ordering of f(x).

Last edited: Aug 26, 2016
18. Aug 28, 2016

### Deedlit

Well, the purpose of using a big ordinal like $\omega_1^{CK}$ is to have some of these details handled for you. At least, that is how I have seen the "big ordinals" used; I haven't seen a "normal function method" that uses big ordinals like $\omega_1^{CK}$ to stratify the levels of the Veblen hierarchy, so I guess I'm not really familiar with what you are talking about.

If we take a normal function $f: \omega_1 \to \omega_1$ (such as $f(x) = \omega^x$), then it is not too hard to prove that the derivative $f'$, the enumeration function of the fixed points of f, is also a normal function from $\omega_1$ to $\omega_1$ (so in particular there are $\omega_1$ fixed points). Therefore we can keep taking derivatives: we can define $f_a$ to be the a-th derivative of f (taking intersections at limit ordinals; one can prove this is still $\omega_1$ to $\omega_1$), and Veblen showed that the enumeration function of the set $\{a \mid f_a(0) = a\}$ is still a normal function; and so it goes down the rabbit hole.
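In symbols, this derivative tower is the standard binary Veblen hierarchy (textbook definitions, not specific to this thread):

```latex
\varphi_0(x) = \omega^x, \qquad
\varphi_{a+1} = (\varphi_a)', \qquad
\varphi_a \ \text{enumerates} \ \bigcap_{b<a} \mathrm{Fix}(\varphi_b) \ \text{for limit } a,
```

and $\Gamma_0$ (Feferman–Schütte) is the least $a$ with $\varphi_a(0)=a$.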

Nitpick: (a^b)^c is not generally equal to a^(b^c), whether you are talking about positive reals or ordinals.

19. Aug 28, 2016

### SSequence

Yeah, this was basically what I was saying. Using a smaller structure (like $\omega_1^{CK}$) might mean that you have to show a lot of extra details (to what extent I am not sure ... maybe quite a bit, actually) to be rigorous (even if the underlying construction is correct). But one benefit is that you automatically show that the ordinal reached is recursive (however, it may not be worth it from any practical perspective if all the details nearly amount to generating the well-ordering function itself).

By using a suitably bigger structure (like $\omega_1$ ... I am assuming it means the first uncountable) for fixed points, it seems that the idea is to reduce the work needed to show that the point reached is well-defined. Though I don't know how logicians show that the ordinal they reach is recursive (probably from some kind of equivalence to a known recursive system?), because that definitely doesn't follow automatically from the construction. It would have to be shown.

So I guess it doesn't apply to, say, natural numbers. I was thinking, for some reason, that it would. What I had in mind was that when we write something like ω^ω^ω, we already have a certain precedence in mind.

Last edited: Aug 28, 2016
20. Aug 28, 2016

### SSequence

I just noted that in my first post in this thread (towards the end) I wrongly wrote (ω^ω)^ω for ω^ω^ω, i.e., for ω^(ω^ω). (ω^ω)^ω is just equal to ω^(ω^2).

21. Aug 29, 2016

### tzimie

Thank you all

22. Dec 11, 2017

### jacquesb

Is condition 1 really necessary? I mean, if f(0) = 0, couldn't we consider that the limit/least upper bound of 0, f(0), f(f(0)), ... is 0, which is the first fixed point of f?

At first sight it seems to me that the value at the limit and the least upper bound are both equal to $\omega \cdot \omega+\omega$, aren't they?

Last edited: Dec 11, 2017
23. Dec 11, 2017

### SSequence

Yes, of course: if $f(0)=0$ then by definition $0$ is a fixed point, the fixed points being the points/ordinals $x$ where $f(x)=x$.

It doesn't seem like the first condition is really vital (but in what sense?). However, one can imagine that as we start to go higher, the $0$ at the beginning would just become a bit redundant, and the definitions of various sequences or functions would have to be adjusted (probably becoming slightly more complicated).

For example, we can just take a function $f$(say from $\Omega$ to $\Omega$) such as:
$f(x)=\omega \cdot x$
This function fails the first property. However, it satisfies the second and third properties.

Its fixed points are enumerated by the function $g$ defined as:
$g(x)=\omega^\omega \cdot x$
This function once again fails the first property but satisfies the second and third properties.

Not much seems to change except the point I mentioned above.

Take the function that was mentioned:
$h(x)=\omega \cdot x + \omega$

Suppose we want to find:
$sup \{ h^n(0) : n \in \mathbb{N}^+ \}$
We have:
$h(0)=\omega \cdot 0+\omega=\omega$
$h^2(0)=h(h(0))=h(\omega)=\omega \cdot \omega + \omega=\omega^2+\omega$
$h^3(0)=h(h^2(0))=h(\omega^2+\omega)=\omega \cdot (\omega^2+\omega)+\omega=(\omega^3+\omega^2)+\omega=\omega^3+\omega^2+\omega$
and so on.....

More formally, we could use (ordinary) induction to describe the general form of $h^n(0)$. Anyway, it is definitely clear from the above that:
$sup \{ h^n(0) : n \in \mathbb{N}^+ \} = \omega^\omega$

============
On the other hand, we can argue that $h(x)=\omega \cdot x + \omega$ has no fixed points. I will leave out the details (because they should be simple enough), but the basic reasoning is not difficult to see. For example, we can write:
$h(x)=f(x) + \omega$
where:
$f(x)=\omega \cdot x$ (as above)
The function enumerating its fixed points (as described above) is:
$g(x)=\omega^\omega \cdot x$

So for example we get:
$h(g(1))=f(g(1)) + \omega=g(1)+\omega \ne g(1)$
Hence $g(1)$ is not a fixed point of $h$. Of course the same holds for all the other values in the range of $g$. For values not in the range of $g$ we have $h(x)>x$ anyway (it shouldn't be too difficult to see why). Hence $h$ has no fixed points.

============

In short, the equality of nested compositions (with a fixed point) should be taken with some care (namely, the function should be normal ... and even then there is the (minor?) peculiarity I described in post #10 in this thread).

For a proof, see reference (1) (mentioned towards the end of post #2 in the other thread you made) ... you don't necessarily have to read all of it.

Last edited: Dec 11, 2017
24. Dec 11, 2017

### SSequence

.......
Sorry, not a good day today (not sure what I was thinking with this whole post). At some point while writing the previous post I mixed up the meanings of $x \cdot y$ and $y \cdot x$ for some reason.

Last edited: Dec 11, 2017