Question Regarding Linear Orders

  • #1
Consider a linear order (on ##\mathbb{N}##). Now it will have a comparison relation associated with it (a function from ##\mathbb{N}^2## to ##\{0,1\}##). Let's denote it as ##\prec##.

I want to define a certain notion, which I will just call "uniformity" (just for brevity). For each ##i \in \mathbb{N}##, we want to define a (linear) order on a set ##S_i \subseteq \mathbb{N}##. Let's denote the corresponding relation for this order as ##\prec_i## (a function from ##(S_i)^2## to ##\{0,1\}##). The set ##S_i## is defined as:
##S_i=\{x\in \mathbb{N} \,\,|\,\, i \preceq x\}##
The function ##\prec_i## agrees with the function ##\prec## on all points of its domain (so, in this sense, it is just a restriction of the function ##\prec##). Intuitively, it is simply the order obtained by removing all elements that are less than ##i## in the original ordering.
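Just to make this concrete, here is a small Python sketch of the restriction (the function names are mine, purely for illustration; I am treating the comparison relation as a boolean function):

Python:
# Sketch: given a comparison relation prec(x, y) for a linear order on N,
# build the membership test for S_i and the restricted relation prec_i.
def make_restriction(prec, i):
    def in_S_i(x):
        # i itself belongs to S_i, together with everything above it
        return x == i or prec(i, x)
    def prec_i(x, y):
        # meaningful only on points of S_i; agrees with prec there
        assert in_S_i(x) and in_S_i(y)
        return prec(x, y)
    return in_S_i, prec_i

# Example: the usual order on N restricted to S_3 = {3, 4, 5, ...}
usual = lambda x, y: x < y
in_S3, prec_3 = make_restriction(usual, 3)
assert in_S3(7) and not in_S3(2)
assert prec_3(3, 10) and not prec_3(10, 3)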

Now, every linear order has a well-ordered initial segment, and hence the order-type of that initial segment is an ordinal. I will call this ordinal the "initial ordinal" for the rest of this post. Now we want to define the notion of uniformity for the order corresponding to ##\prec##. The conditions for it are:
(1) The initial ordinal for all ##\prec_i## (##i \in \mathbb{N}##) must be equal to the same ordinal say ##\alpha##.
(2) For any ##i \in \mathbb{N}##, ##\prec_i## can never represent a well-order.

For any linear order, we say that it contains a certain segment of order-type ##O## when we can find some initial and final sections such that the whole order can be visualised as:
initial_segment, O, final_segment

For any (countable) ordinal ##\alpha##, define ##\alpha^*## as the order-type obtained by reversing a well-order relation (of ##\mathbb{N}##) with order-type ##\alpha##. That is, in the well-order relation for ##\alpha## we switch ##1##'s with ##0##'s and ##0##'s with ##1##'s. For example, here are two linear orders (on ##\mathbb{N}##) with order-types ##\omega^*## and ##(\omega \cdot 2)^*##:
##...,6,5,4,3,2,1,0##
##...,13,11,9,7,5,3,1 \,\,## ##\,\,...,12,10,8,6,4,2,0##
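Concretely, the switching of ##1##'s and ##0##'s just swaps the arguments of the comparison function. A small Python sketch (illustrative names) reproducing the two examples above:

Python:
# Reversing an order relation: x precedes y in the reversal iff y preceded x.
def reverse(less):
    return lambda x, y: less(y, x)

# omega*: reversal of the usual order on N, i.e. ..., 3, 2, 1, 0
omega_star = reverse(lambda x, y: x < y)
assert omega_star(5, 2)  # 5 precedes 2 in omega*

# (omega*2)*: reversal of "evens in order, then odds in order", giving
# ..., 13, 11, ..., 3, 1 followed by ..., 12, 10, ..., 2, 0
def evens_then_odds(x, y):
    if x % 2 != y % 2:
        return x % 2 == 0  # every even precedes every odd
    return x < y

omega_2_star = reverse(evens_then_odds)
assert omega_2_star(13, 1) and omega_2_star(1, 0)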

Now define a certain order-type as AB (meaning informally that B is placed to the right of A):
(A) ##\alpha##
(B) ##\omega^*##
So by AB I essentially mean placing ##\omega^*## to the right of ##\alpha##. For example, if I had written ##\omega## instead of ##\omega^*## in B, then AB would simply mean the order-type ##\alpha+\omega##.

Now here is the question. For some given value of ##\alpha##, can we find an example of a linear order relation ##\prec## that is uniform but doesn't contain the segment AB described above?

=================

Here are a few examples that I think will help clarify the question a bit (since I didn't describe it very formally). If we define ##Z## to be the order-type of the integers (under their usual comparison relation) and ##N=\omega## to be the order-type of the natural numbers (under their usual comparison relation), then consider the following order-types:
(1) ##N,Z,Z,Z,Z,Z,...##
(2) ##Z,Z,Z,Z,Z,Z,...##
(3) ##...,Z,Z,Z,Z,Z,Z,Z,Z,Z...##

Any linear-order whose order-type is (1) would be uniform. No matter which "starting position" we pick, we will encounter exactly ##\omega## many elements before the well-ordered part ends. The same can be said for (2) and (3). Further, we observe that the second condition is also satisfied for (1), (2), (3).

Here is an example where the first condition for uniformity is satisfied but the second condition fails:
(4) ##Z,Z,Z,Z,Z,Z,...,N##

Here is an example where the first condition for uniformity fails:
(5) ##N,Z,N,Z,N,Z,N,Z,N,...##
The order-type (5) isn't uniform. For example, if we pick our "starting position" from within the "very first" N, we will encounter ##\omega## many elements before the well-ordered part ends. However, if we pick our starting position from within any of the Z's we will observe that the initial ordinal turns out to be ##\omega \cdot 2##. We must have the same initial ordinal for every starting position.

Now here is an observation about (1),(2) and (3). We could write them as:
(1) ##\omega,\omega^*,\omega,Z,Z,Z,Z,...##
(2) ##\omega^*,\omega,\omega^*,\omega,Z,Z,Z,Z,...##
(3) ##...,Z,Z,Z,Z,\omega^*,\omega,\omega^*,\omega,Z,Z,Z...##

We can observe that if we take the initial_segment to be [empty], [##\omega^*##], [##...,Z,Z,Z,Z,\omega^*##] respectively in (1), (2), (3), then right after it we have a segment of the form ##\omega,\omega^*##. Since, in our case, ##\alpha=\omega## (the initial ordinal in the uniformity condition), we can write this segment as ##\alpha,\omega^*##.
Main Question Ends at this Point

============================================================

Motivation: Here is some context for the question. I will describe it very briefly. It is related to the paper https://arxiv.org/abs/math/9808093 (in particular, the relevant theorems are 2.2 (page-8) and 3.2 (page-17)). It is also mentioned there that, starting from an empty tape, it is impossible to halt at ##\omega_{CK}##.

Suppose we consider all decidable/recursive sets, and consider the indexes of these functions/sets. We first determine these and write the indexes in increasing order. Then we sort out and keep those indexes whose corresponding function encodes a linear order (this can be easily checked). Call the remaining set ##C \subset \mathbb{N}##.

Further, by theorem-2.2 we will be able to sort out all those which represent a well-order. But because we can't halt at ##\omega_{CK}## (starting from an empty tape), we have to assume that there will be some linear order relations (in the set ##C##) whose initial ordinal is ##\omega_{CK}##. If we didn't have that, we could halt at ##\omega_{CK}##, because the set ##C## would become empty right at that point (and a flag cell could signal this).

If we simply check all the functions in the usual manner, the set ##C## will become empty at (probably?) ##\omega_{CK} \cdot \omega##. However, what I was thinking was that, to check whether a function representing a linear-order represents a well-order or not, we could try to be a bit more efficient. Normally we find the smallest position, then the next smallest one, and so on.

However, what if we tried all starting positions in our linear order? Remember that the starting positions are described by natural numbers, so it isn't difficult to do so. However, we have to be a bit careful. We don't want to compute the initial ordinal of the first starting position, then the second one, and so on (we could get unlucky, and the initial ordinal for the first position could be ##\omega_{CK}##). Instead, we complete the first stage for all positions, then the second stage for all positions (and so on). This way, if any single starting position has an initial ordinal less than ##\omega_{CK}##, it will be detected. And actually, in that case we would be able to figure out whether the linear order is a well-order or not in a recursive length of time.
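At the finite level, this kind of scheduling is just dovetailing. Here is a toy Python sketch of the idea (on the actual machines the scheduler runs transfinitely, so this is only a finite analogue, with illustrative names):

Python:
# Round-robin over many simulations: every simulation completes its n-th
# stage before any simulation starts its (n+1)-th stage.
def round_robin(simulations):
    active = dict(enumerate(simulations))
    while active:
        for i in list(active):
            try:
                result = next(active[i])  # advance simulation i by one stage
            except StopIteration:
                del active[i]             # this starting position is done
                continue
            if result is not None:        # e.g. an initial ordinal was found
                yield i, result

# Toy usage: simulation i "finds" its answer after i stages.
def toy_sim(i):
    for _ in range(i):
        yield None
    yield f"done after {i} stages"

for who, what in round_robin([toy_sim(i) for i in range(3, 6)]):
    print(who, what)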

There is a possibility that some peculiar machine limitation (since ##\omega##-tape machines can have these sometimes, I think) prevents this construction (this probably needs to be looked at more closely). However, it seems to me that there is a more important question that should probably be asked first. Namely, based on the last paragraph, we must assume the existence of recursive sets which encode a linear order whose initial ordinal is equal to ##\omega_{CK}## for all starting positions. But it seems that the linear-order (in its order-type) can't, at any point, have an ##\omega^*## right after the ##\omega_{CK}## part (this would require some further justification, but I will skip it for brevity).

So the question I stated is related to this (in a more general sense, so that conditions such as being recursive are removed), and asks for an example of any uniform linear-order whose initial ordinal is equal to the same ##\alpha## for all starting positions and which, further, doesn't contain an ##\omega^*## right after ##\alpha##.
 
  • #2
I suggest you summarize the question to make it more concise and easier to read.
 
  • #3
Here is the question (while skipping some descriptions):
Every linear order (on some fixed/given set) has a well-ordered initial segment, and hence the order-type of that initial segment is an ordinal. Let's call this ordinal the "initial ordinal". We want to define the notion of uniformity (just naming it generically) for a linear order:
(a) The initial ordinal must be the same no matter which starting position we use (and remove the elements that are smaller than the starting position). Note that removing all the elements smaller than a given position (while preserving the ordering for all remaining elements) will also lead to a linear order (just on a smaller set).
(b) For any starting position, after removing all elements that are smaller than starting position (while preserving the order for remaining ones), we must not get a well-order.

Secondly, I defined the notion of ##\alpha^*## (which is basically the mirror of ##\alpha##, in a sense) corresponding to any ordinal ##\alpha##. The question is: for a uniform linear order, can one prove or disprove (via an example) that it always contains a section of the form:
##\beta,\omega^*## (##\beta## immediately followed by ##\omega^*##)
within it at some point. Here ##\beta## denotes the (unique) initial ordinal that corresponds to any starting position in the given (uniform) linear order.

Also, the question is divided into three sections (the first two are related to the main question). I wrote down a good number of examples in the second section. I think looking at them should clarify the question.
 
  • #4
I think the answer to the question I asked is negative, via the following example:
Let's denote the order-type of the rational numbers as ##Q##. Now we can define ##Q_{\omega}## (or ##Q_{\omega^2}##, for example) as the order-type obtained by replacing each individual "point" by "##\omega##" (respectively "##\omega^2##"). A linear order on ##\mathbb{N}## can be constructed (recursively) with this order-type, and it will have no section within it of the type ##\omega+\omega^*##.
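Here is a hedged Python sketch of one such recursive construction for ##Q_\omega## (this particular coding and the helper names are my own, purely illustrative). Each ##n## is unpaired into ##(i,k)##: the rational ##q_i## says which copy of ##\omega## we are in, and ##k## is the position inside that copy:

Python:
import math
from fractions import Fraction

def unpair(n):
    # inverse of the Cantor pairing function
    w = (math.isqrt(8 * n + 1) - 1) // 2
    y = n - w * (w + 1) // 2
    return w - y, y

def calkin_wilf(i):
    # i-th positive rational (each occurring exactly once); the positive
    # rationals are dense without endpoints, so their order-type is Q
    q = Fraction(1)
    for _ in range(i):
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)
    return q

def less_Q_omega(m, n):
    # order primarily by the rational, secondarily by position in its copy
    (i, k), (j, l) = unpair(m), unpair(n)
    qi, qj = calkin_wilf(i), calkin_wilf(j)
    return qi < qj or (qi == qj and k < l)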
 
  • #5
While the previous post answers the question I originally asked, it seems it doesn't resolve my confusion. It seems to me that perhaps there is something more that can be said. I thought about this after making the last post, so I don't make any claim of correctness. Nevertheless, I will describe the idea I had (meanwhile I will try to think about it more carefully, several times over, to see where I might be missing something or making a mistake).

============

First note that we can add another condition to uniformity (described in previous posts):
(c) the linear order should have a minimal element.

If the construction which I describe below (and in the OP) can indeed be implemented (which can be determined by very careful examination), specifically on the machines described in the paper, then it seems to contradict theorem-8.8 in the linked paper. For obvious reasons, it is safe to assume it is exceedingly unlikely that there is a logical mistake in the proof of that theorem. So I will probably try to check all aspects (after making the post) to see where I am making a mistake.

============

Anyway, here is the point I was thinking about. If we look at the context/motivation section of the OP, the main reason the construction I described goes beyond ##\omega_{CK}## time is the uniform linear orders (formed by decidable/recursive sets) whose initial ordinal is always equal to ##\omega_{CK}##. So one question to ask is whether we can devise a method that removes all such orders in less than ##\omega_{CK}## time (while not removing arbitrarily large well-orders described by decidable sets).

Consider any uniform linear order (with initial ordinal ##\omega_{CK}##) whatsoever, described by some recursive set. Let's denote its comparison relation by ##less_1:\mathbb{N}^2 \rightarrow \{0,1\}##. Now we wish to form another linear-order defined by the relation ##less_2:\mathbb{N}^2 \rightarrow \{0,1\}##. The idea here is that if the order-type of the original order was ##R##, then the order-type of this new order will be ##R+\omega##.
The definition for ##less_2## is simply:
##less_2(x,y)=less_1(x/2,y/2)## ----if x and y are both even
##less_2(x,y)=##truth value of ##x<y## ----if x and y are both odd
##less_2(x,y)=1## ----if x is even and y is odd
##less_2(x,y)=0## ----if x is odd and y is even

The new order-type can be seen as:
(order-type R formed with even numbers) ##+## (##1,3,5,7,9,11,13,...##)
The main thing to note here is that if ##less_1## is a recursive function then so is ##less_2##.
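A small Python sketch of this splicing (illustrative names; this is just the case analysis above written out):

Python:
# Given a relation less_1 of order-type R, less_2 has order-type R + omega:
# the evens carry a copy of R and the odds form the appended omega-line.
def make_less_2(less_1):
    def less_2(x, y):
        if x % 2 == 0 and y % 2 == 0:
            return less_1(x // 2, y // 2)  # R, copied onto the evens
        if x % 2 == 1 and y % 2 == 1:
            return x < y                   # the appended omega on the odds
        return x % 2 == 0                  # every even precedes every odd
    return less_2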

Let's note that in the procedure in the OP, a set ##C \subset \mathbb{N}## (of indexes) corresponding to all linear orders (formed with decidable sets) was formed at the beginning. We want to append the following one-time procedure right after the formation of the set ##C##.

First we wish to identify all those functions (via indexes) in the set ##C## which, towards the "very end", contain a section exactly of the form:
##1,3,5,7,9,11,13,...##
Further, we require that ##1## must not have any predecessor element (note: this can be easily checked by the machine because the underlying relations are recursive).

We need to consider carefully which kinds of functions will be identified. Certainly some (but definitely not all) functions with order-type ##\alpha+\omega## (##\alpha<\omega_{CK}## being a limit) will be identified. But so will many functions that correspond to uniform linear orders AND in the end contain that section. Note that this will still leave all functions forming ordinals of the form ##\omega^2 \cdot x## (##x<\omega_{CK}##).
A couple of points here:
First we won't be identifying a function like:
##1,3,5,7,9,11,13,...\,\,## ##\,\,0,2,4,6,8,10,12,...##
since the required section is in the beginning and not in the end.
Secondly all functions forming ##\omega^2 \cdot x##'s will not be identified because they don't have any "line" towards the end. And thirdly, something like the following will not be identified either (because ##1## has a predecessor):
##2,4,6,8,10,12,...\,\,## ##\,\,0,1,3,5,7,9,11,13,...##

Also, note that in the above we have just identified some functions (via indexes) in the set ##C##. As of yet, we haven't removed anything from ##C##. We will first modify all the functions obtained, in a fixed/preset manner described below.

Now, generically denote a function that we identified from the set ##C## as ##less_1:\mathbb{N}^2 \rightarrow \{0,1\}##. Then we want to form a function ##less_2:\mathbb{N}^2 \rightarrow \{0,1\}## as (need to check):
##less_2(x,y)=less_1(2x,2y)##
The point is that the original function formed an order-type ##R+\omega## and we modified it to form order-type ##R##.
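In code form (continuing the earlier sketch and its illustrative names), the reverse modification is a one-liner; note that `strip_omega(make_less_2(f))` behaves exactly like `f`:

Python:
# Strip the appended omega-line: recover a relation of order-type R
# from one of order-type R + omega built on the even numbers.
def strip_omega(less_2):
    return lambda x, y: less_2(2 * x, 2 * y)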

It seems (perhaps a mis-step on my part?) that with this modification we have identified all functions forming limit ordinals (that is, ones of the form ##\omega \cdot x##, where ##x<\omega_{CK}##). But more substantially, it seems that we have also identified all functions describing uniform linear orders. Now we use the "program equivalence" function [meaning one that takes two indexes as input and returns true iff the two indexes represent equivalent (ordinary) programs] and remove every single index from the set ##C## that corresponds to the family of functions we obtained. Note that program equivalence is just an arithmetic function, so it is easy for the machines in question.

But there are two main things, it seems, that need to be considered:
(i) What have we actually removed?
(ii) Have we actually left anything in the set ##C##?

(i) It seems that we have removed functions for both (a) all uniform linear orders and (b) all functions describing limit values less than ##\omega_{CK}##. To see (b), first consider an arbitrarily chosen (recursive) well-order for ##\omega^2## as follows (for convenience, I have chosen one based on the usual Cantor pairing function):
##0,1,3,6,10,...\,\,\,2,4,7,11,...\,\,\,5,8,12,...\,\,\,9,13,...\,\,\,14,...##
Well, how and why will instances of all programs forming this function be removed? The reason is that the following well-order for ##\omega^2+\omega## is also recursive:
##[0,2,6,12,20,...\,\,4,8,14,22,...\,\,10,16,24,...\,\,18,26,...\,\,28,...]+[1,3,5,7,9,11,13,...]##

And the way the procedure described above is supposed to work is that it will take the function corresponding to the well-order relation for ##\omega^2+\omega## above, and then modify that well-order to get exactly the well-order relation for the well-order of ##\omega^2## which I just described (and then the equivalence function removes all indexes corresponding to the given function for ##\omega^2## ... which was chosen arbitrarily).
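For concreteness, here is a small Python sketch of the ##\omega^2## comparison relation displayed above (my own rendering, with illustrative names): position ##n## is unpaired into ##(x,y)## with ##n=\pi(x,y)##, and sorting by ##y## first lines the blocks up as ##0,1,3,6,10,...##, then ##2,4,7,11,...##, and so on:

Python:
import math

def unpair(n):
    # inverse of the Cantor pairing function
    w = (math.isqrt(8 * n + 1) - 1) // 2
    y = n - w * (w + 1) // 2
    return w - y, y

def less_omega_sq(m, n):
    (x1, y1), (x2, y2) = unpair(m), unpair(n)
    return (y1, x1) < (y2, x2)  # lexicographic: block index first

assert less_omega_sq(10, 2)  # 10 lies in the first block, 2 in the second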

And, for the same reason, it seems to me that all uniform linear orders in the set ##C## will also be removed by this process (please correct me if I have made a mistake here, since this is the main point). For example, consider any arbitrary uniform linear-order in the set ##C## with order-type ##R##. Its underlying relation is recursive. But that implies that we can form a recursive relation which uses just the even numbers to form the order-type ##R## (0 replacing 0, 2 replacing 1, 4 replacing 2, ...) and, at the end, adds the section ##1,3,5,7,9,11,...## [note that ##1## will never have any predecessor here, as long as we are appending this section to a uniform linear order]. This recursive relation describing ##R+\omega## will be identified in the set ##C##, modified to form the "original" relation of type ##R##, and then all programs forming this relation will be removed.

Also, note that whether all non-uniform linear orders are removed or not doesn't matter, since they can be handled with the construction in the OP.

(ii) Now it might seem like we have removed everything in the set ##C##, but maybe not? The point is that while we have removed many well-orders from the set ##C##, we do want to keep arbitrarily large (below ##\omega_{CK}##) well-orders in our set; otherwise the construction wouldn't be of any use.

It doesn't seem to me that well-orders for any non-limit elements can be removed (please correct me if I have made a mistake). Recall that we identified the following section in our linear-orders towards the very end (with ##1## not having any predecessor):
##1,3,5,7,9,11,13,15,...##
So, for well-orders, we identified some functions forming values of the form ##\alpha+\omega## (##\alpha## being a limit less than ##\omega_{CK}##). Note that ##\alpha## will have to be a limit because we required that ##1## can't have any predecessor. After that we removed the ##\omega## part to form a well-order for ##\alpha##. It doesn't seem possible to me that we could get ##\alpha## as a non-limit.

So it seems that the construction in the OP should work? (Or, more likely, I made an erroneous step somewhere.) One minor detail to add is that in the construction in the OP we simulated one stage for all starting positions, a second stage for all starting positions, and so on. Because we now only have well-orders for non-limit elements, we need to add a stipulation that we will ignore start positions that lead to a halt in ##<\omega## stages.

P.S.
I could have described the construction in a more organised manner, I think (since one half of it is described in the OP and the other half in this post), but it's only worth writing it in a fully organised form once it has been checked to a reasonable degree of certainty. I will think it over during the next few days. Due to the simplicity of the basic idea, I think that if there is a mistake I will hopefully be able to spot it relatively quickly.
 
  • #6
I think I will put the construction I described in the last part of post#1 and in post#5 into an organised form. I have also thought of a simplification of one aspect of the construction. Of course, the advantage of an organised description is that it becomes easier to understand and also easier to spot an error if there is one (and it seems it would have to be a silly one, since the construction is quite short and simple).

But anyway, I will do that in the post that follows this one. Before that, I will first describe the main idea of the theorem (from the paper linked in the OP) that is relevant to the construction (since the construction I described is just a modification of the procedure in that theorem). I will of course describe things in my own words, focusing on the aspects that are more relevant to the specific discussion in this thread. Hopefully it will make the post (following this one) easier to understand.

theorem 2.2 (page-8)
Given any function from ##\mathbb{N}^2## to ##\{0,1\}##, suppose we encode it as a function from ##\mathbb{N}## to ##\{0,1\}## in any reasonably obvious way. If this encoded function is given as input then the machine can always decide whether it describes the well-order relation (for a well-order of ##\mathbb{N}##) with some order-type (say ##X##).

For the purpose of this question, it is important to describe the main idea that is discussed in the proof of theorem-2.2. Suppose we are given a function ##f:\mathbb{N} \rightarrow \{0,1\}##. Let's denote the comparison relation (from ##\mathbb{N}^2## to ##\{0,1\}##) induced by this function (after "deciphering" it) as ##<_f##. We want to know whether ##<_f## is a well-order relation or not. First we verify that ##<_f## is a linear order (on ##\mathbb{N}##). After that, the idea is to proceed in stages. Intuitively, at each stage ##\alpha## we have stored all the elements that occur in positions ##<\alpha## and we want to determine the ##\alpha##-th minimal element (if one exists). So we just use a set ##A_\alpha## that changes as the stages proceed. Initially we have ##A_0=\phi##. For convenience, we let ##B_\alpha=\mathbb{N}-A_\alpha## denote the complement of ##A_\alpha## (note that initially ##B_0=\mathbb{N}##).

At the beginning of some stage ##\alpha##, suppose the elements of ##B_\alpha## in strictly increasing (natural number) order are ##b_0,b_1,b_2,b_3,b_4,...##. First we set up a cell which serves as a flag (it is set to ##0## before we start our processing in each stage). We set up a variable (say ##v##) which is initially set to ##b_0##. Furthermore, we have a counter that ranges from ##i=0## to ##i<\omega##. For each ##i## we make the comparison ##b_{i+1}<_f v## (comparing against the current value of ##v##). If this comparison turns out to be false, we do nothing and increase ##i## by ##1##. If it turns out to be true, then we do two things: (1) set the variable ##v## to ##b_{i+1}##, (2) simply change the flag cell to ##1## and back to ##0## again. In the limit, when these ##\omega## steps pass, we can simply check the flag. If it's ##1## we return false (the given ##<_f## doesn't describe a well-order). That's because the value ##1## in the flag cell (at the end of a stage) implies that it oscillated infinitely often in the limit. If the flag is ##0##, then suppose the value ##V## was stored in the variable ##v##. We set:
##A_{\alpha+1}=A_\alpha \cup \{V\}##
##B_{\alpha+1}=B_\alpha - \{V\}##
Note that since values are only ever added to ##A## and only ever removed from ##B##, there is an obvious way to form these sets at limit stages. We can halt and return true when we discover that, at some stage ##X##, we have ##B_X=\phi## (or alternatively ##A_X=\mathbb{N}##).

The above paragraph is similar to how the procedure is described in the paper. It might be added (in case it isn't clear) that there is a slightly less efficient way to find ##A_{\alpha+1}## (from ##A_\alpha##) too. At each stage ##\alpha##, we are just looking for an element ##x## with the following property:
##\exists x \in B_\alpha \, \forall y \in B_\alpha \, [x \neq y \rightarrow x<_f y]##
When we find this element, it will be removed from ##B## and added to ##A## (in preparation for the next stage). If such an element exists at each stage (until we run into a stage where ##B## becomes empty) then ##f## encodes a well-order. Otherwise, if such an element doesn't exist at some stage (before ##B## becomes empty), then ##f## doesn't encode a well-order.

==============================

A bit of searching seems to show that linear-orders based on decidable sets are quite a well-studied topic. At any rate, nothing specific from what is mentioned below is used. However, I thought it was interesting enough to mention (it is certainly relevant).

Another paper that might be of interest:
https://www.jstor.org/stable/1994961

The recursion theory involved is a few notches too advanced for me. However, the most relevant part (in the context of this topic) is right at the beginning. The author seems to describe certain recursive linear orders (presumably ones defined on ##\mathbb{N}## that can be described by recursive sets) whose order-type can be written as:
##\omega_{CK} \cdot (1+\eta)+\alpha = (\omega_{CK}+\omega_{CK} \cdot \eta)+\alpha##
where ##\alpha<\omega_{CK}## and ##\eta## is the order-type of the rationals in the interval ##(0,1)##.

Now if someone reads the definition of a "uniform linear order" which I posted in the main question of post#1 and then in post#3, one could note that:
---- when ##\alpha \neq 0##, the uniformity condition fails
---- when ##\alpha = 0##, the uniformity condition is satisfied

EDIT:
I deleted the post after this one, because I identified the mistake after writing it in an organised way.
 
  • #7
Heh, I feel pretty silly about post#5. Here is the issue with post#5 (and with the method in the post I deleted) ... both of which are along very similar lines. The method deletes programs for all limit elements ##<\omega_{CK}##. The problem this causes is that we have to add an extra stipulation of the form (towards the end of post#5):
"One minor detail to add is that in the construction in OP we simulated one stage for all starting positions, second stage for all starting positions and so on. Because we only have well-orders for non-limit elements now we need to add a stipulation that we will ignore start positions that lead to halt in ##<\omega## stages."

This is regarding the construction which uses all starting positions and hence goes over ##\omega_{CK}## time only for those linear orders where the initial ordinal is always ##\omega_{CK}##. However, adding this detail has an unintended side-effect (which isn't minor). But when we remove well-orders for all limit elements, we have no choice but to add this stipulation. Now consider an order-type of the form ##R+n##, say [where ##R## is a (recursive) linear-order whose initial ordinal is always ##\omega_{CK}## and ##n<\omega##]. The method will stumble over ##\omega_{CK}## time for all order-types of this form (due to this extra stipulation).

==============================

But that isn't the reason for bumping this thread. I have thought of a (short) method which avoids all the deletion business from post#5. Instead it does something else. It still uses the "simultaneous start positions" construction in the OP in an essential way. I will post it in a few hours (unless I find some trivial issue). Perhaps I will feel silly about that again (but I suppose that's part of thinking things over). It definitely feels worth taking a look at, at least.

P.S. post#6 isn't displaying correctly for some reason (maybe it's just me?).
 
  • #8
Now here is a different method which I thought of. Let's consider the case of recursive linear-orders specifically. I will first describe, in a little more detail, the "simultaneous start positions" construction in the OP (basically because that's the central element). Some of it is copy-pasted from my deleted post.

For simplicity, let's just restrict our attention to recursive linear-orders. Given a comparison relation ##\prec## for a recursive linear-order: for any ##i \in \mathbb{N}## we can define the relation ##\prec_i## (described at the very beginning of the OP). We can convert ##\prec_i## from a linear-order on ##S_i## to one on ##\mathbb{N}## if we wish. The main point is that we can define an "initial ordinal" ##\alpha_i## corresponding to every value ##i##, which describes the order-type of the initial well-ordered segment corresponding to ##\prec_i##.

Now let's define (for the order ##\prec##):
##\alpha_{min}=min\{ \alpha_i | i \in \mathbb{N} \}##

==============================

Description of Simultaneous Start Procedure
Now let's look at the procedure in the first half of post#6. Suppose we are given a relation ##<_f## which describes a linear-order, but doesn't describe a well-order. The main thing to note is that the initial well-ordered segment for ##<_f## can still be extracted in the same manner.

In fact, if we look at theorem-3.2 (page-17, for the paper linked in the OP), it seems to use an r.e. relation whose initial ordinal is ##\omega_{CK}##. I am guessing that the r.e. relation used would be describing a linear-order on a subset of ##\mathbb{N}##. It wouldn't be difficult to convert the relation to one defined on ##\mathbb{N}## (the relation wouldn't be r.e. anymore, but it shouldn't matter). In fact, this is how the machine gets to ##\omega_{CK}## time: by discovering the initial well-ordered segment of this relation. The machine spends some extra time to find whether there is a minimal element after this. Because there isn't one, it halts a little time later.

The actual procedure finds the smallest element, the second smallest element, and, proceeding in a generic manner, the ##\alpha##-th smallest element. The very first time the number of stages reaches the order-type of the well-order encoded by ##<_f##, it halts. If ##<_f## is a linear-order but not a well-order, it just stops when the stage becomes equal to the initial ordinal.

However, we can modify the procedure so that it uses all possible starting positions and proceeds from there. More precisely, if the comparison relation of the linear-order was ##\prec##, then as in post#1, we can define ##\prec_i##:

For each ##i \in \mathbb{N}##, we want to define a (linear) order on a set ##S_i \subseteq \mathbb{N}##. Let's denote the corresponding relation for this order as ##\prec_i## (a function from ##(S_i)^2## to ##\{0,1\}##). The set ##S_i## is defined as:

##S_i=\{x \in \mathbb{N}| i \preceq x\}##

The function ##\prec_i## agrees with the function ##\prec## on all points of its domain (so, in this sense, it is just a restriction of the function ##\prec##). Intuitively, it is simply the order obtained by removing all elements that are less than ##i## in the original ordering.
For the purpose of convenience, we might want to convert ##\prec_i## to form a linear-order on ##\mathbb{N}## (instead of ##S_i##), which should be quite easy. However, it doesn't seem to make an essential difference.

Note that in the first iteration, we want to complete the first stage for all starting positions. In the second iteration, we want to complete the second stage for all starting positions. In the ##\alpha##-th iteration, we complete the ##\alpha##-th stage for all starting positions. What we do not want to do is pick one starting position and not stop until we find its initial ordinal.

Now the main thing to ask is whether there is a certain advantage to using all possible starting positions. In a certain sense, the answer is yes. First suppose that ##\prec## describes a well-order for an ordinal of the form ##\alpha=\omega^x## (##x \geq 1##). No matter which starting position we use, the number of iterations before halting will always be ##\alpha##. Now suppose that ##\prec## describes a linear-order (but not a well-order). In that case, the number of iterations before halting will always be equal to ##\alpha_{min}## (the smallest order-type corresponding to some start position for ##\prec##).

In the context of a recursive linear order, what that means is that unless ##\alpha_{min}=\omega_{CK}##, the above procedure will always halt in ##<\omega_{CK}## time. This is true because it is an easy exercise to show ##\alpha_{min} \leq \omega_{CK}## for any recursive linear-order (I have skipped the proof, but let me know if there is any doubt about it). The worst-case scenario is ##\alpha_{min} = \omega_{CK}##, and this is what the simultaneous start positions procedure can't handle (that is, the procedure will take ##>\omega_{CK}## time to process these linear-orders).
 
  • #9
And now here is what I was thinking. What I am trying to avoid is the issue in post#5 (which I described in post#7). What happened was that the removal of every program/index corresponding to certain well-orders [in the case of post#5, all limit ordinals ##<\omega_{CK}##] forced us to add an exception to the "simultaneous start procedure". And this exception caused the procedure to stumble [above ##\omega_{CK}## in processing time] for certain (recursive) linear orders with ##\alpha_{min}<\omega##, even when we had removed all programs/indexes for those that had ##\alpha_{min}=\omega_{CK}##.

Now, once again, we identify all recursive linear orders (that is, linear orders based on decidable sets). Store the indexes for the corresponding programs in a set ##C \subset \mathbb{N}##. Corresponding to every index/program in the set ##C##, we modify it in a fixed manner. There are no deletions this time; only a fixed way of modifying every index/program in ##C##.

How we make the modification is important. In the OP I defined ##\prec_i## corresponding to the relation ##\prec##. I sometimes call this an (inclusive) "right-cut", since it keeps the element ##i## and all the elements to the "right" of it while removing all elements to the "left". I think what we can do is make a left-cut corresponding to every program in the set ##C## (the left-cut being non-inclusive and made at the element ##0##). Meaning: for any relation ##\prec##, we preserve the ordering for all elements less than ##0##, while removing ##0## and all elements greater than it in the ordering.

Writing the relation as ##<_L##, we can define it on a set ##A_0## given by:
##A_0=\{x\in \mathbb{N} \,|\, x \prec 0\}##
Once again, the relation ##<_L## will agree with ##\prec## on all points of its domain. We might make ##<_L## an order on ##\mathbb{N}## instead of ##A_0## [replacing the smallest element of ##A_0## with ##0##, the second smallest element with ##1##, and so on].
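A Python sketch of this left-cut with the re-indexing (illustrative names; the enumeration below is total only when ##A_0## is infinite, which is the case of interest):

Python:
# Non-inclusive left-cut at 0: keep exactly the elements strictly below 0
# in the order prec, then re-index them onto N by usual numeric size.
def left_cut_at_zero(prec):
    def in_A0(x):
        return x != 0 and prec(x, 0)
    def nth_element(n):
        # n-th member of A_0 in the usual order of natural numbers
        count, x = -1, -1
        while count < n:
            x += 1
            if in_A0(x):
                count += 1
        return x
    def less_L(i, j):
        # the re-indexed relation on N; agrees with prec on A_0
        return prec(nth_element(i), nth_element(j))
    return less_L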

It seems that this process, when applied to every function in ##C##, will always keep some programs corresponding to the order-type of any recursive well-order [this is what we want, as discussed at the beginning of this post]. But for it to work, it also has to change every recursive linear-order with ##\alpha_{min}=\omega_{CK}## in such a way that, after the transformation, the resulting linear-order has ##\alpha_{min}<\omega_{CK}##.

So we apply the simultaneous start procedure after making this change to every index/program in the set ##C## ... that is, making a non-inclusive left-cut at ##0##. After this change to every element in ##C##, we start from the smallest index (in ##C##) and proceed in increasing order. And this time we make no exception as to how the simultaneous start procedure runs (for each individual index) ... as soon as it identifies ##\alpha_{min}## for some relation, it ends the processing of that linear-order immediately [removing the corresponding index from ##C##]. The whole process ends when the set ##C## becomes empty (indicated by a single flag cell).

Let me know if I have made an important mistake in this post.
 
  • #10
The method in the above post also has an issue. It doesn't always change ##\alpha_{min}## from ##\omega_{CK}## to ##<\omega_{CK}##.
 
  • #11
I have thought of yet another construction. It is a combination of the previous ideas with one or two new elements added. However, admittedly, the last two times I wasn't careful enough.

Because I was able to find the issue relatively quickly in both cases [post#5 and the deleted post, and then later post#9], once in about three days and then later in just a day ... I am going to be much more careful this time before posting.

I am not going to post anything in full detail for now. Rather, I will take a week to carefully write out every step and try to check for any implicit assumptions [and then try to repeat this a number of times].

If even after a week of constant thinking I can't find an issue, then I will post the construction. Otherwise, if I don't make any other post after this, it can be assumed that I found some issue in the construction and don't think it is worth writing everything out in detail here. Also, in that case, it can be understood that the thread has run its due course.
 
  • #12
I think I will try to describe what I was thinking. It is essentially a question since, if it worked, I don't think it's possible that no one would have thought of it.

I have also pinged @Deedlit on MSE, so he will perhaps give some input on this (as the PF account doesn't seem to have been logged in for a long time). What are your thoughts on the description in the post below?

As for the structure of the post below, there are two main parts:
(a) I first make a certain assumption. I think the assumption can be made significantly weaker. However, it is important to be very explicit about it. That's because, unfortunately, I have very little knowledge/understanding of the underlying theory [partly because I just haven't studied it]. So I don't know whether the assumption I made [or even the much simpler assumption] is true or false.

I will try to improve my knowledge of the theory to understand whether the assumption holds or not. But honestly, the underlying study required would probably take a long time. I think it is likely that an expert on this specific topic would be able to give a swift answer.

(b) I describe a certain point which is supposed to work IF the assumption in (a) is actually correct. In that case, I would also like to know whether I am making some mistake in this part or not.
 
  • #13
Hi SSequence,

I haven't taken the time yet to read over the whole thread, but let me just address the initial question: what about ##Q##? Is there some condition that eliminates ##Q## from the list of available linear orders?

Edit: Looking further, you seem to have answered the initial question yourself. I'll try to find time to read the rest later.
 
  • #14
You are quite right of course about ##Q##. Another example is:
##\omega^4 +(\omega^2) \cdot \omega^* + \omega^4##
From any start point, the (maximum) initial well-ordered segment has type ##\omega^4##, but there is no section of the form ##\omega^4+\omega^*## (no matter which position we look from). But anyway, ##Q## (or close variations) was much closer to the example I was trying to find.

I do not think that the whole thread needs to be read. Just read post#6, as this is the most relevant to what follows. It isn't displaying in the correct format for me though (I can re-post it if needed). But you can also look at the same theorem from the source (the paper linked in post#1).

I will post, in a day or two, the observation that I have been thinking about during the past week or so [it is significantly different from what I have posted till now]. I couldn't get to posting it because (i) it was very long and (ii) I was also trying to think about aspects of it [I have made a number of modifications from what I thought initially ...]. I will probably post it in two or three parts, because it gets a bit long.

P.S.
Also, this is just a brief recapitulation of the whole thread till now. I basically thought of a (quite basic) modification of the procedure described in post#6. The actual procedure discovers the initial well-ordered segment of any recursive linear-order. What I thought of was starting from all possible "positions". The point was that even if the initial well-ordered segment extends to ##\omega_{CK}##, if there is any position from which the linear-order has an initial segment of length ##<\omega_{CK}##, it will be detected in recursive time [note that being unable to detect, in recursive time, that an arbitrary recursive linear-order (that is not a well-order) is not a well-order is one of the main things preventing a machine (such as in the paper in the OP) from halting at ##\omega_{CK}##].

But if one looks at the paper linked in the second half of post#6, there exist recursive linear-orders which have an initial segment of length ##\omega_{CK}## from every starting position. I looked at a number of changes to the "modified procedure" ("simultaneous start procedure") to see whether these kinds of linear-orders could be dealt with (in recursive time) [some changes/ideas I posted, and some I didn't post here]. I didn't find any reliable way (that I could see or guarantee to work in all cases).

Nevertheless, this is not directly relevant to what follows (I will post it in a day or two).
 
  • #15
I think I will post the lines along which I have been thinking. One problem is that I have changed my idea too many times in the last few days. In particular, there is one step I am quite uncertain about (and it seems to be an important one). I haven't had the time to think it through, so I will post it in a slightly speculative manner, clearly marked as such (in a few days or a week I can probably comment on it better).

Other than that, I have not given references for a number of results. That's mostly to get to the basic point a little quicker. I have added [ref.] at all such points. So if someone is interested, I will describe the specific reference for those points/results.

Some background below (for the post that follows):

=============================

We assume a generic computational model (ORM/OTM or something equivalent). The supremum of all halting positions (on empty input) will be denoted as ##c##.

[ref if needed for this paragraph] We define the notions of an ##\alpha##-TM and an ##\alpha##-RM.
So, for example, to define the collection of partial computable ##\alpha##-RM functions:
"For each ##x \in \alpha##, either the program halts in less than ##\alpha## time or it loops forever."
And to define the collection of computable ##\alpha##-RM functions:
"For each ##x \in \alpha##, the program must halt in less than ##\alpha## time."

So we have the following collections of functions from ##\alpha## to ##\alpha##. The first three contain both partial and total functions and the last three only contain total functions:
(1) ##\alpha##-##r.e.## [link]
(2) ##\alpha##-RM partial computable
(3) ##\alpha##-TM partial computable
(4) ##\alpha##-##recursive## [link]
(5) ##\alpha##-RM computable
(6) ##\alpha##-TM computable

(1),(2),(3) define exactly the same class of functions ... and similarly, so do (4),(5),(6). Obviously, we also have (1)=(2)=(3) ##\supset## (4)=(5)=(6).

=============================

Further, for any value ##a \in Ord##, it is known to be true that [ref if needed]:
##\sup(a\text{-recursive})=\sup(a\text{-r.e.})##
The left-hand side means the supremum of the order-types for which we have a well-ordering of ##a## [in the collection of functions (4) above]. Sorry, I have put it in a bit of a weird way, I think. For the right-hand side, as far as I can understand, I think we consider well-orderings based on some subset of ##a##.

Now define the function ##F: Ord \rightarrow Ord##:
##F(0)=\omega## (say)
##F(x+1)=S(F(x))##, where for any value ##a \in Ord##, ##S(a)=\sup(a\text{-recursive})##
For limit values, the function ##F## is the sup of all values below (meaning it is a continuous function).

=============================

Since I don't know the underlying theory, I should clearly mark where an assumption is made. I will make the assumption that for any value ##a \in Ord## there is no ##a##-r.e. linear-order whose order-type is greater than ##S(a)##.

For a lot of values ##a \in Ord##, it is definitely true that there is no ##a##-recursive linear-order whose order-type is greater than ##S(a)## (since I could find a definite [ref] for it).

At any rate, it might be possible to work around this assumption. It seems better to assume it if it is true, though.
 
  • #16
The programs we are considering have one input variable. Note that, normally, when we are considering halting positions we either consider input ##0## to the program or a program with no input. We can note that these programs (with one input variable) are enumerated quite easily.

Mostly I have been thinking along the following lines. What I have been trying to consider is based on the point that the program with a given index can be thought of as computing a function on domain ##a## or on domain ##b## (where ##a,b \in Ord##). More specifically, if we have an ##a##-partial computable function then it can be thought of as a function from ##a## to ##a##. Similarly, a ##b##-partial computable function can be thought of as a function from ##b## to ##b##. Interestingly, the very same program can be thought of as computing both of these functions.

What I have been thinking about is how a program with some specific index changes its "category" as we increase the value ##a \in Ord##.

Let's consider the task of categorizing what a specific program is doing for an arbitrary ##a \in Ord##. We have the following:
(1) It calculates an ##a##-partial computable function but not a linear-order
(2) It calculates an ##a##-partial computable well-order
(3) It calculates an ##a##-partial computable linear-order that isn't a well-order

Note that the underlying set for orderings in (1), (2) and (3) is assumed to be some subset of ##a##.

=============================

Consider two different values ##a,b \in Ord## so that ##b>a##. The point is to ask which of the "transitions" we can definitely rule out while moving from ##a## to ##b##. It seems that we can definitely rule out the transitions:
(1) ##\rightarrow## (2)
(1) ##\rightarrow## (3)
(3) ##\rightarrow## (2)

The meaning of ruling out the first transition is that if we had some program categorised as (1) for ##a##, then it isn't possible (in an a priori sense) that the same program would be categorised as (2) for ##b##. And similarly for the other transitions that have been mentioned.

To describe some reasoning: if we consider (3) ##\rightarrow## (2), the reason for ruling it out is that if there was an infinite descent containing values (all of which are less than ##a##), then that infinite descent will never be removed (as long as we are considering the same program) and will carry over to all future values ##b>a##.

P.S.
This isn't complete. I will post the observation a bit later probably. But if there are parts where I have skipped over too much or haven't been clear enough (or perhaps a ref. is needed for a part in previous post), please do mention it.
 
  • #17
Post#15,#16 can be skipped. The idea I was thinking about in that direction doesn't work.

Anyway, I thought of another idea closely related to the procedure from post#6. But it is quite different from the "simultaneous start" method mentioned in the second half of the OP (and also summarized in the second half of post#14). I will describe that idea in this post.

First of all, a copy-paste of the first half of post#6 in the correct format (even though the same point can be understood from the source, it is central to this modified method, so it doesn't seem a bad idea to include it):
theorem 2.2 (page-8)
Given any function from ##\mathbb{N}^2## to ##\{0,1\}##, suppose we encode it as a function from ##\mathbb{N}## to ##\{0,1\}## in any reasonably obvious way. If this encoded function is given as input then the machine can always decide whether it describes the well-order relation (for a well-order of ##\mathbb{N}##) with some order-type (say ##X##).

For the purpose of this question, it is important to describe the main idea that is discussed in the proof of theorem-2.2. Suppose we are given a function ##f:\mathbb{N} \rightarrow \{0,1\}##. Let's denote the comparison relation (from ##\mathbb{N}^2## to ##\{0,1\}##) induced by this function (after "deciphering" it) as ##<_f##. We want to know whether ##<_f## is a well-order relation or not. First we verify that ##<_f## is a linear order (on ##\mathbb{N}##). After that, the idea is to proceed in stages. Intuitively, at each stage ##\alpha## we have stored all the elements that occur in positions ##<\alpha## and we want to determine the ##\alpha##-th minimal element (if one exists). So we just use a set ##A_\alpha## that changes as the stages proceed. Initially we have ##A_0=\phi##. For convenience, we let ##B_\alpha=\mathbb{N}-A_\alpha## denote the complement of ##A_\alpha## (note that initially ##B_0=\mathbb{N}##).

At the beginning of some stage ##\alpha##, suppose the elements of ##B_\alpha## in strictly increasing (natural number) order are ##b_0,b_1,b_2,b_3,b_4,...##. First we set up a cell which serves as a flag (it is set to ##0## before we start our processing in each stage). We set up a variable (say ##v##) which is initially set to ##b_0##. Furthermore, we have a counter that ranges from ##i=0## to ##i<\omega##. For each ##i## we make the comparison ##b_{i+1}<_f v## (comparing against the current value of ##v##). If this comparison turns out to be false, we do nothing and increase ##i## by ##1##. If it turns out to be true, then we do two things: (1) set the variable ##v## to ##b_{i+1}## (2) simply change the flag cell to ##1## and back to ##0## again. In the limit, when these ##\omega## steps pass, we can simply check the flag. If it's ##1## we return false (the given ##<_f## doesn't describe a well-order). That's because the value ##1## in the flag cell (at the end of a stage) implies that it oscillated infinitely often in the limit. If the flag is ##0##, then suppose the value ##V## was stored in the variable ##v##. We set:

##A_{\alpha+1}=A_\alpha \cup \{V\}##

##B_{\alpha+1}=B_\alpha - \{V\}##

Note that since values are only ever added to ##A## and only ever removed from ##B##, there is an obvious way to form these sets at limit stages. We can halt and return true when we discover that, at some stage ##X##, we have ##B_X=\phi## (or alternatively ##A_X=\mathbb{N}##).

The above paragraph is similar to how the procedure is described in the paper. It might be added (in case it isn't clear) that there is a slightly less efficient way to find ##A_{\alpha+1}## (from ##A_\alpha##) too. At each stage ##\alpha##, we are just looking for an element ##x## with the following property:

##\exists x \in B_\alpha \, \forall y \in B_\alpha \, [ x \neq y \rightarrow x<_f y ]##

When we find this element, it will be removed from ##B## and added to ##A## (in preparation for the next stage). If such an element exists at each stage (until we run into a stage where ##B## becomes empty) then ##f## encodes a well-order. Otherwise, if such an element doesn't exist at some stage (before ##B## becomes empty), then ##f## doesn't encode a well-order.
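The finite analogue of this stage-by-stage search is easy to write down. Here is a toy Python sketch (only a finite restriction, of course; the actual procedure runs transfinitely on the machines in question, and the names are illustrative):

Python:
# Repeatedly look in B for an element below all other members of B under
# less_f, moving it from B to A; A then lists the initial segment in order.
def extract_initial_segment(less_f, universe):
    A, B = [], set(universe)
    while B:
        minimal = next((x for x in B
                        if all(x == y or less_f(x, y) for y in B)), None)
        if minimal is None:
            return A, False  # no least element left: not a well-order
        A.append(minimal)
        B.remove(minimal)
    return A, True           # B emptied: a well-order (finite, here)

# Toy check on "evens before odds" restricted to {0, ..., 9}
order, ok = extract_initial_segment(lambda x, y: (x % 2, x) < (y % 2, y), range(10))
assert ok and order == [0, 2, 4, 6, 8, 1, 3, 5, 7, 9]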

Now, as far as I can understand, exactly the same method can be used to discover the initial well-ordered segment of a recursive linear-order. Something similar is done in theorem-3.2 (page-18) [though I don't fully understand whether there is a certain advantage or necessity to using an r.e. relation instead of a recursive relation ... it shouldn't matter though].

But now we won't reach a stage ##X## such that ##B_X=\phi## (as mentioned in the quote above). Instead we will reach a stage where an infinite descent is discovered (and it becomes clear that the given relation doesn't form a well-order). Meaning that there will be no ##X##-th minimal element.

Now suppose the initial ordinal is equal to ##\omega_{CK}## [which I will denote as ##W## for the rest of this post] for the given linear-order. Let's define a function ##f:W+1 \rightarrow \omega+1## so that for any value ##x \leq W## the value ##f(x)## gives us the number of descents (to reach ##x##). Note that we have:
##f(x)<\omega## for ##x<W##
##f(W)=\omega##
One thing that I noticed (about a month ago) is that we can define a function ##g:W+1 \rightarrow \omega+1## such that:
##g(0)=f(0)##
##g(x+1)=g(x)##
##g(x)=## lim inf of all values ##f(i)## (where ##i \rightarrow x##), for limit ##x##

One interesting thing I was able to show was that:
##g(x)<\omega## for all ##x<W##
##g(W)=\omega##
So actually, the first time ##g## becomes equal to ##\omega## is exactly at time ##W##. I have skipped the proofs (I am reasonably confident about them) here to avoid getting side-tracked. Anyway, that result doesn't seem to change things much. That's because, for example, to simulate this kind of function reliably using a tape, we would keep changing the number of ##1##'s and ##0##'s on some allocated tape. I couldn't think of any other reliable method that worked better than this. Eventually, when the value of ##g## becomes equal to ##\omega## at some point, the whole tape will be all ##0##'s or all ##1##'s (depending on the cell convention). However, scanning through and confirming this requires an extra time equal to ##\omega##.

@Deedlit
Nevermind. I posted a certain idea (not the one mentioned above), but I think it is too incomplete and tentative at this point (and probably it doesn't work anyway). I will ping you in case there is something substantial to add.
 
  • #18
Actually, I do think that there may be a question here w.r.t. posts #15 and #16. When I wrote (in the previous post):
"Post#15,#16 can be skipped. The idea I was thinking about in that direction doesn't work."

The direction I was thinking about was an iterative procedure which, in the end, seems to show that bad ordinals exist and that the smallest one must definitely be less than ##c##. The definitions of good and bad ordinals are:
For any ##\alpha##, let ##\alpha^+## denote the smallest value that is greater than ##\alpha## but belongs to the range of the function ##x \mapsto \omega^{CK}_x##.
good ordinal: ##\alpha^+=\sup(\alpha\text{-recursive})##
bad ordinal: ##\alpha^+>\sup(\alpha\text{-recursive})##

==============================

I was a bit surprised that a simple construction could show this existence [but of course it uses the theorem from the paper in the OP along with one more theorem ... which probably contain much of the substantial content]. But it didn't seem that one could proceed any further with that construction (hence the quoted part above). But, after some more thought on this, I think there might be a genuine underlying question here (to which I do not know the answer).

I will first link all the references that are used in post#15,#16. Then I will describe in detail the iterative procedure which I was talking about.

==============================

First a link to relevant papers:
A---- https://arxiv.org/abs/math/9808093
B---- https://www.sciencedirect.com/science/article/pii/S0168007209000086
C---- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.3332&rep=rep1&type=pdf
D---- https://www.sciencedirect.com/science/article/pii/0003484379900251

I should also add that I am a complete newbie on this topic (and, well, almost all topics). I merely learned about the (existence of) papers C and D from this question/answer (link).

What I have referred to in post#15,16 and above are merely some easy-to-understand statements made in the above papers. Here are all of them in order:
----- The paragraph in post#15 relating various computability notions is just a paragraph in B towards the end of page-14 (towards the end of section-3).

----- Regarding the relations between a-recursive and a-r.e., this is mentioned at the beginning of section-2 of D.

----- Regarding what I mentioned in this post, see the beginning (first-page) of D

----- Also, for any a-recursive linear-order (on a), the length of the initial segment can never exceed ##a^+##. This is mentioned at the very beginning of proposition-2.1 (in section-2) of D.

----- Also, regarding the last theorem in A, one question is whether it generalises to OTMs. It is possible that a few small changes in the computational model will change the applicability of the theorem (in fact, this becomes even clearer considering the result I described in post#17).

I tried to search for it quite a bit. I was able to find a seemingly reliable reference stating that the theorem does generalise to OTMs. I might link that later on.
 
  • #19
Here is a sketch of the construction (continued from post#16). As I mentioned there:
Let's consider the task of categorizing what a specific program is doing for an arbitrary ##a \in Ord##. We have the following:
(1) It calculates an a-partial computable function but not a linear-order
(2) It calculates an a-partial computable well-order
(3) It calculates an a-partial computable linear-order that isn't a well-order

Note that the underlying set for orderings in (1), (2) and (3) is assumed to be some subset of a.
So we can classify and write down the underlying indexes as ##A_1,A_2,A_3## respectively. Note that we have:
##A_1 \cup A_2 \cup A_3 =\mathbb{N}##
I also described how we can rule out some transitions as we move from a stage ##a \in Ord## to a higher stage ##b \in Ord## (where ##b>a##). The basic idea is quite simple. If we carefully observe the transitions that were ruled out (second half of post#16), it becomes clear that:
----- the set ##A_2## will never have any elements added to it as we increase the ordinal value (at which it is evaluated); instead, values may be eliminated from this set as we increase the value.

So now, if we take the definition of the function ##F## (second-last paragraph of post#15) and the assumption I described (last paragraph of post#15), an obvious idea is to iterate and proceed in stages. That is, first we evaluate ##A_2## at the point ##\omega## [I will denote this as ##A_2(0)##]. Then we continue and evaluate ##A_2## at the sup of ##\omega##-partial recursive well-orders. This value will be ##F(1)=\omega^{CK}_1## (as ##\omega## is good). We denote the value of the set ##A_2## at ##F(1)=\omega^{CK}_1## as ##A_2(1)##. In general, the value of ##A_2## at any point ##F(a)## will be denoted as ##A_2(a)##.

At limits we have to be a bit careful. If ##l \in Ord## is a limit, it is by no means necessary that ##A_2(l)## is the intersection of all the ##A_2(i)##'s for stages ##i## below ##l##. To make this precise, define a set ##B(x)## (##x \in Ord##) which varies as ##x## changes, defined at all limit stages (and, say, constant between limit stages). For any limit stage ##l##, we define ##B(l)## as the intersection of all the ##A_2(i)##'s for stages ##i## below ##l##. Then, in general, the following is not true at limit stages:
##A_2(l)=B(l)##

Note that when we arrive at some limit stage ##p##, calculating ##B(p)## for that stage is a relatively inexpensive operation. That is, we will be able to do it well before time ##F(p+1)## [usually values in the range of ##F## should be sufficiently closed]. The first time it ever occurred that ##A_2(p)=B(p)##, there would be a program that could halt at exactly time ##F(p+1)##. So we might ask ourselves: does such a stage ##p## ever occur? I believe the answer should be yes. In particular, whenever ##p## is a limit greater than or equal to ##c## (the supremum of halting positions), we obviously also have ##F(p) \geq c##. At this point, if it occurred that ##A_2(p) \neq B(p)##, that would mean that there are some indexes missing from ##A_2(p)## but present in ##B(p)##. The smallest such index would be some specific natural number. This would give a finite signal for the program to halt at a point ##\geq c##. This is impossible by the definition of ##c##.

Also observe that it should not be difficult to conclude that ##F(c)=c##. That's because if we assumed ##F(a) \geq c## for some ##a<c##, it is not difficult to derive a contradiction.

So now we have established ##p<c##. But then ##F(p)## must be a bad ordinal. If it weren't, then, by definition, we would have:
##F(p+1)=S(F(p))=[F(p)]^+##
But then, as I mentioned, we would have a program that (on empty input) halts exactly at time ##F(p+1)##. This can happen because when we, at every limit stage, check each index in ##B(p)## for being a well-order (on some subset of ##F(p)##), the check will prove true for every single value in ##B(p)##. The given program would halt the first time it encounters this kind of stage (where every index in ##B(p)## proves to be a well-order). And the time it would require to detect all the well-orders (and make the set ##B(p)## empty) would take it exactly to ##F(p+1)##. Halting at ##F(p+1)## contradicts the fact that all values in the range of ##x \mapsto \omega^{CK}_x## are impossible to halt at (from empty input).
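Just to make the halting rule concrete, here is a minimal finite sketch (the stage loop and the stub predicate `is_well_order_at` are my own hypothetical stand-ins for the transfinite checks):

```python
def first_stage_all_well_orders(B, is_well_order_at):
    # B: finite set of candidate indexes.
    # is_well_order_at(stage, x): stub standing in for the (transfinite)
    # check that index x encodes a well-order at the given stage.
    # We halt at the first stage where every index in B verifies as a
    # well-order -- the analogue of halting at exactly F(p+1) above.
    stage = 0
    while True:
        if all(is_well_order_at(stage, x) for x in B):
            return stage
        stage += 1

# Example with a toy predicate: index x first verifies at stage x,
# so the program "halts" at stage max(B).
print(first_stage_all_well_orders({3, 5}, lambda s, x: s >= x))  # 5
```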

Hence, due to the last paragraph, we must conclude that ##F(p)## is bad.

P.S.
One thing to note is that, generally, determining all elements in the set ##A_2(a)## can't be assumed to be an inexpensive operation. This detection would generally be expected to take more than ##F(a+1)## time, but not much more than that [making the assumption towards the very end of post#15] ... so we will end up doing this well before ##F(a+2)## time.

But this construction itself isn't enough. Some more work is required to frame the question I was thinking of. I will put that up a bit later. Meanwhile, if I have been unclear on some part of this post, please do mention it.

Edit:
It has occurred to me that there is a significant complication in the "construction" above. Just focusing on the set ##A_2##, the way the procedure works is to check (at each stage-a) for all inputs ##x<F(a)## whether a program with a given index runs for a time less than, or greater than or equal to, ##F(a)##. Whenever the run-time exceeds ##F(a)##, the given input is supposed to be ignored (equivalent to looping).

However, it is of course entirely possible that for a given value ##x<F(a)## the computation runs for a time greater than ##F(a)## and halts after that. So, in a direct sense, what is being done in the procedure described in the post doesn't match the notion of an ##F(a)##-partial recursive function [which must either run for a time less than ##F(a)## or loop].

If this issue is not salvageable, then what I wrote above doesn't work (meaning it doesn't show what it claims). But if it is, then there shouldn't be any problem, I think. It would require justification nevertheless. I think it might possibly be salvageable completely (but it is difficult to see directly; I wouldn't be surprised if I am mistaken though). The reason is that we are only interested in a very limited region ... namely below the smallest limit stage ##p## where the set ##A_2## has no missing index [meaning ##A_2(p)=B(p)##].

So if we can manage to show that, for any stage less than or equal to the ones we are interested in, the functions considered by our procedure (as in the previous paragraph) can be made to match the notion of ##F(a)##-partial recursive function, then it should be fine. One way this might be possible is to show that, for the stages we are interested in, we can always modify a program (with any index) so that whenever it runs for a time greater than ##F(a)##, we can make it loop. I am thinking that perhaps the missing index values (which are just natural numbers) can help us with this?

Anyway, this part should be considered quite carefully.
 
Last edited:
  • #20
I think at this point I have been able to convince myself (to some degree) that the issue mentioned in "Edit:" in the previous post#19 is manageable. Note that what we want is this: for any limit stage ##l## [which is less than or equal to the first limit stage where there is no missing index, call it ##p##], if a program halts at a time greater than or equal to ##F(l)## for some input less than ##F(l)##, then we want to be able to make it loop for all such inputs. And I think the reason why it should be possible for any limit ##l<p## is the fact that ##l## has a (smallest) missing index (that is, ##A_2(l) \neq B(l)##). Sure, the (smallest) missing index for some limit ##k<l## will be discovered a bit later, at a time a little greater than ##F(k+1)##. But that doesn't seem like a problem to me (it would be a problem, say, if the missing index for stage-k was being discovered at a time greater than ##F(k+\omega)##).

So, for any input ##x## to the program, we modify it such that it searches for which "region" ##x## falls into (the one "before" ##F(l)## or the one "after" it) and proceeds based upon that. The main thing to ensure is that if (for a limit stage-##l##) the input is smaller than ##F(l)##, then this search process does not take us beyond ##F(l)##. If it does go beyond, then it doesn't help us much. And, at least on the face of it, it seems to me that this condition can be satisfied here.

==============================

Anyway, in this post I will try to proceed further and frame the main question I am trying to get at. I should add that I will skip a few steps in this post, because I feel that I should try to get to the main question first. Of course, the main question being well-formed does require that all the intermediate steps work out (and each of the steps needs to be checked carefully). But, for now, I will leave some of this for later and just give a very brief sketch of the rest of what I am thinking.

Now one thing that we need to ask ourselves is this: if we denote by ##p## the smallest limit stage with no missing index, and we start counting the "number of times" missing indexes occur (say at limit stages), what will this counting lead us to? Meaning, what ordinal value (call it ##r##) will this count equal? Excluding some boundary cases (which need to be looked at separately), it seems that if ##p## is sufficiently "rare", then there are two main possibilities:
(a)------ ##r<F(p)## (and ##r## is a limit)
(b)------ ##r=F(p)## (and ##r## is a limit)

If we consider possibility (a), it seems a bit difficult for me to see how it can be true. The thing is that, if (a) were true, then basically I think that would imply that there is a program (with no input) which has a non-decreasing variable that equals ##\omega## for the first time at runtime ##F(p)##. Why this would happen requires some detailed justification. To summarise though, I think this should be true because, as I see it, for every limit stage ##l<p## [but the boundary cases here also need to be considered carefully, as they don't seem trivial], it seems that we can construct a program that writes down a strictly increasing ##\omega##-sequence for ##F(l)## and then halts a little later (the halting signal should be providable by the missing index for stage-##l##, I think). And the main reason why it should be possible is simply that all values of the form ##F(l)## (where ##l## is a limit that is a multiple of ##\omega^2## and less than ##p##) can be assigned a sequence using the missing indexes.

So when we claim that ##r<F(p)## and also that it is a limit, if we combine this with the last paragraph (giving us a sequence for ##r##), this gives us a sequence for ##F(p)## itself [since ##r## is just counting the missing indexes, which are "embedded" (so to speak) in ##F(p)##].

Why I think this matters is that, in this case, I get the feeling that if the above is true [a non-decreasing variable getting to ##\omega## for the first time at ##F(p)##], then, using the fact that stage-##p## has no missing indexes, we can actually construct an ##F(p)##-partial recursive function which represents an order-type equal to ##sup(F(p)-r.e.)## [which has to be impossible]. Once again, the why of this requires some fairly detailed justification [again skipping it for now].

I should add that for each of the following:
---- the precise justification required for the Edit: in post#19
---- the boundary cases for the above-mentioned point
---- each of the two paragraphs above

quite a detailed explanation is required. It would probably require four or five pages (or possibly even more). But I have described the salient reasons why I think they should be true.

To be honest though, before proceeding, while I do have somewhat detailed sketches for the previous statements in mind, there is a fairly non-negligible probability that I have made a mistake in the statements above. So, until I write all of it in detail, this possibility is in my mind.

I will try to write more detailed explanations for the above statements later on (and if I happen to find a mistake, point it out). But for now, I will just move on to the question.

==============================

So assume the above statements hold. That is, assuming that the count of missing indexes (denoted by ##r##) indeed equals ##F(p)##, there is a question that one can ask.

Denoting the set of smallest missing indexes at each stage by ##I##, it seems to me we now essentially have our ##F(p)##-partial recursive programs being capable of emulating any ordinary program that has access to a well-order of ##I \subset \mathbb{N}## (with order-type ##F(p)## itself). But this is very similar to the definition of ##\omega^{CK}_{\alpha}## (all I know about this is from an answer to the question I asked here). Note that if ##F(p)##-partial recursive programs are truly able to emulate an ordinary program with access to a "copy" of ##F(p)## (see the link), this contradicts ##F(p)## being a bad ordinal.

Consider any ##i_1,i_2 \in \mathbb{N}##. More precisely, we can think of an oracle which:
------- returns ##0## or ##1## if both ##i_1,i_2 \in I##
------- gets into a loop if either of ##i_1,i_2## doesn't belong to ##I##
And as far as I know, when we say that such and such set defines a well-order of a subset ##S## of ##\mathbb{N}##, then for comparisons where one or both elements are not in ##S##, we are simply supposed to loop? No? Or am I mistaken somewhere?

If we were treating a subset ##I \subseteq \mathbb{N}## as a copy, things are a bit different though.
In that case, we can think of an oracle which:
------- returns ##0## or ##1## if both ##i_1,i_2 \in I##
------- say, returns ##2## if either of ##i_1,i_2## doesn't belong to ##I##
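To spell out the two conventions, here is a minimal sketch (the function names and the `compare` stub are my own, purely for illustration):

```python
def subset_oracle(i1, i2, I, compare):
    # "Well-order of a subset" convention: out-of-domain queries diverge.
    # NOTE: calling this with i1 or i2 outside I never returns.
    if i1 in I and i2 in I:
        return compare(i1, i2)   # 0 or 1, per the underlying order
    while True:                  # deliberate divergence ("loop")
        pass

def copy_oracle(i1, i2, I, compare):
    # "Copy" convention: out-of-domain queries get a third answer, so
    # membership in I is itself decidable from the oracle.
    if i1 in I and i2 in I:
        return compare(i1, i2)   # 0 or 1, per the underlying order
    return 2
```

The difference that matters here: with `copy_oracle` a program can decide whether a given number belongs to ##I## at all, while with `subset_oracle` it can only safely query pairs it already knows to be in ##I##.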

So, in case everything I wrote before holds, would that mean that, for a subset ##S \subset \mathbb{N}##, the definition of ##\omega^{CK}_{\alpha}## is only valid for a copy-based definition, and the definition based on a well-order of ##S## is strictly weaker?

P.S.
I have been thinking of a certain way to extend the idea so that the above point (about the distinction between a copy and a subset of ##\mathbb{N}## serving as a well-order) can possibly be bypassed. But I think there are too many earlier points whose validity I need to check first. And I might be mistaken about this extension anyway (at any rate, I think I will try to think about it after establishing the validity, or lack of it, of the previous points).
 
Last edited:
  • #21
This month, I took some time to think about the construction in the last three posts (along with post#15, post#16). It was clear (after five or six days) that there was an assumption made by me that wasn't warranted by the links I quoted. For Ref(B) in post#18, the equivalence of ##p##-partial computable and ##p##-partial recursive that I described was warranted only for admissible values in that reference.

So, quite briefly, one question is whether this construction could be recovered. I do not know whether it is possible or not (though this is not what I have been thinking about). For concreteness, consider the ordinal ##p## which is the limit of ##\omega^{CK}_i## (where ##i \in \mathbb{N}##). Now ##p## isn't admissible. And when we draw out the ##p##-partial computable functions, the equivalence to ##p##-partial recursive functions isn't warranted by Ref(B) in post#18. Since I don't know anything about ##\alpha##-recursion theory, I can't comment on this any more than that.

Now as I mentioned towards the end of the last post:
I have been thinking of a certain way to extend the idea so that above point (about distinction between copy and subset of N serving as well-order) can possibly be by-passed.
I thought of a construction (after finding out the issue mentioned at the beginning of this post) that seems to:
---- bypass the distinction mentioned in the quote above (since it is not really used anywhere in the construction to be described)
---- fix the problem of worrying about the (lack of) equivalence between ##p##-computable and ##p##-recursive.

Other than that, while it is definitely not an extension of the method above (post#15---post#19), it does take a number of significant aspects from that method. Also, my description in post#15---post#19 was quite haphazard. That's because I was writing within a few hours (or a day) of thinking about a given topic.

I have taken two weeks to think about this construction, of course while also trying to look for any concerns about accuracy (which is still ongoing). Nevertheless, I have also written the outline of the argument a couple of times on paper (and also, at least once, organised the sketch into a reasonably detailed outline of the argument). So this time the description will be much more organised (and also easier to read).

One thing is that a number of the references that I used before will be re-used. However, some of the statements mentioned before will not be used directly. So, to emphasize the main points directly, I will re-link many of the references from before. Sorry for the repetition here. There will also be one or two new references.

So, within a day or two, I will write:
(i)---- list of references used (along with the main statements/theorems used from the reference)
(ii)---- a summary of what the argument/construction is about (meaning what does it intend to show)
(iii)---- a fairly detailed sketch of the construction (but with a number of details left out)

The purpose of the sketch is to be sufficiently detailed that the main point of the construction can be fully understood (or at least "almost fully understood") after reading it in detail.

Now, after writing this out in the next few days, I will try to write out:
(iv)---- a fairly detailed document that also covers much of the details left-out in the sketch

I feel that writing such a document in detail might also help me to spot any accuracy issue (in case there is one) ... since I have gone through the basic overview more than a few times now, going through one more level of detail might be helpful [since if there is a mistake, then either it is caused by some basic misunderstanding on my part ... OR, "possibly", I am repeating the same mistake over and over when I repeat the basic overview of the argument (both in writing and in my head)]. If I do not find any issue while writing it, I will try to upload that document. The reason for uploading as a document is (a) the length of the document (which would be unsuitable for a post), and (b) the possibility of fixing minor errors or organisation issues and re-uploading.

I suppose if I don't find any accuracy issue in the construction before uploading a detailed document, then it is likely to be a suitable mathoverflow question. But regardless of whether this is the case, that is for later. Right now I will try to post (i), (ii), (iii) mentioned above in the next day or two.
 
  • #22
Background:
I will consider the summary of what the main argument is about. That is, what does the argument (described later on) intend to show? Alongside this, I will also list the references that will be used (along with the definitions or theorems that will be used in the argument).

First of all, the main argument is related to a definition of admissibility. We will denote the ##x##-th admissible ordinal as ##\omega^{CK}_x##. The function ##x \mapsto \omega^{CK}_x## is not a continuous function. That is, in general, there are many limit values ##x## such that ##\omega^{CK}_x## is not equal to:
##sup\{\omega^{CK}_i : i<x\}##
In the argument, I will use the function ##x \mapsto \omega^{CK}_x## alongside a closely related function which I will denote as ##x \mapsto \beta_x##. This latter function can be thought of as a "continuous completion" of the former (in a manner of speaking) --- and indeed this can be made fully precise. In Ref.[1] all the details regarding this function can be found (it was a question posted by me). One difference is that there I started from ##\beta_0=\omega^{CK}_1=\omega_{CK}##. Here, throughout, we will use ##\beta_0=\omega=\omega^{CK}_0##. For example, this means that ##\beta_4## in the original question will be denoted as ##\beta_5## here. For values ##i \geq \omega## there will be no difference.

Now let's consider Ref.[2]. In this reference, the notions of a "good ordinal" and a "bad ordinal" are described. For any ordinal ##\alpha##, ##\alpha^+## is used to denote the smallest ordinal that is greater than ##\alpha## and is admissible. A relevant quote from the first paragraph of Ref.[2]:
"Roughly speaking, our main concern is the question of when and how ##\alpha^+## can be defined in terms of ##\alpha##-recursion theoretic concepts, where for a given admissible ##\alpha##, ##\alpha^+## is smallest admissible greater than ##\alpha##."
From the above quote, we can conclude that when we have:
##\alpha=\omega^{CK}_x## (for some ##x \in Ord##)
then we can write:
##\alpha^+=\omega^{CK}_{x+1}##

Furthermore, on the first page, it is mentioned that the identity:
##\alpha^+=|\alpha##-##recursive|##
can be true or false. Here ##|\alpha##-##recursive|## is the least ordinal not representable as an ##\alpha##-recursive well-ordering of a subset of ##\alpha##. When the above identity is true, ##\alpha## is called good, and when it is false, ##\alpha## is called bad. Furthermore, in the beginning of section-2, the following inequality is mentioned:
##|\alpha##-##recursive| \leq \alpha^+##
From the above, it seems that we can conclude the following characterisations of a good and bad ordinal:
good ordinal: ##|\alpha##-##recursive|= \alpha^+##
bad ordinal: ##|\alpha##-##recursive| < \alpha^+##

Now with this we proceed further and consider Ref.[3]. It defines the notions of an ##\alpha##-TM and an ##\alpha##-RM. From the last paragraph of section-3:
"The obvious next question has to do with runtimes. If the OTM in question is an ##\alpha##-Turing machine, i.e. halts in fewer than ##\alpha## steps (or not at all) on inputs less than ##\alpha##, can we compute the same function ##f## using an ##\alpha##-register machine? (And conversely?)"
And a similar definition of an ##\alpha##-RM is given:
"An ##\alpha##-register machine halts in fewer than ##\alpha## steps (or not at all) and never writes any ordinal ##\geq \alpha## in any register."

Later in the paragraph, it is mentioned that for admissible ##\alpha##, the ##\alpha##-TM and ##\alpha##-RM both compute precisely ##\alpha##-partial recursive functions. Let's call (for any ##\alpha \in Ord##) the functions computed by ##\alpha##-TM and ##\alpha##-RM as "##\alpha##-TM partial computable" and "##\alpha##-RM partial computable" respectively. So we have the following collections of functions as equal (at least for admissible ##\alpha##):
(1) ##\alpha##-partial recursive functions
(2) ##\alpha##-RM partial computable
(3) ##\alpha##-TM partial computable
With (2) and (3) it seems fair to call them "##\alpha##-partial computable" functions for brevity (since, with a lot of fairly reasonable infinite computation models, we will get a similar equality for admissible ##\alpha## at least).

Now I am assuming that "##\alpha##-recursive functions" simply refers to the collection of all those ##\alpha##-partial recursive functions which are total. Similarly, we can use the term "##\alpha##-computable functions" to refer to the collection of all those ##\alpha##-partial computable functions which are total. Because of the equality of (1),(2),(3) above, we get the following collections of functions as also equal (at least for admissible ##\alpha##):
(4) ##\alpha##-recursive functions
(5) ##\alpha##-computable functions

So now, if we set up a suitable computational model for which precisely the collection of functions in (1),(2),(3) is partial computable (for admissible ##\alpha##) --- then we have the definition of an ##\alpha##-computable function as:
"For each ##x \in \alpha## the program must halt in less than ##\alpha## time"
Also, note that with an ordinal register machine (ORM) style program, once we have ##\alpha## of the form ##\omega^x## (for some ##x \in Ord##), there is no need to place any restriction other than the one placed in the quote above (this is also mentioned in Ref.[3]).

Up to now I have essentially repeated a number of references (and statements/definitions) that I mentioned in post#15 and post#17 (in a more organised way). Now we come to the reference which is closely related to the argument in question. Admissibility is a technical set-theoretic notion (which I don't understand), which somehow happens to be related to the definition I described in the question in Ref.[1]. But not quite, since in general ##\omega^{CK}_x## isn't equal to ##\beta_x## --- as covered in detail in the beginning. Now in Ref.[4] a definition related to the notion of infinite programs is given. Theorem-9 in Ref.[4] says:
"An ordinal ##\alpha## closed under ordinal exponentiation is admissible iff there is no ##\alpha##-computable function ##g## that maps some ##\beta < \alpha## cofinally into ##\alpha##."

Now there are some important points that need to be discussed w.r.t. the theorem above. First of all, it seems that in Ref.[4] the phrase "##\alpha##-computable" is supposed to have a somewhat different meaning from how I used the same phrase above. The author defines this notion under "Definition-2", and seems to use the phrase "##\alpha##-computable" for a different notion than, say, total ##\alpha##-RM functions.

But, to be honest, I noticed this just now. I will just briefly mention that:
(a) The argument I wrote assumed that the phrase ##\alpha##-computable was used by the author in the sense of all total ##\alpha##-RM functions. However, that doesn't seem to be the case upon reading "definition-2". So I will need some time to think about whether this will cause a major issue in my argument or not.

(b) I will stick with the terminology of distinguishing between "##\alpha##-partial computable" and "##\alpha##-computable" (as I described previously) for the rest of this post (and in the argument after that). The former phrase I will use in the sense of all ##\alpha##-RM functions (including the partial ones) and the latter phrase I will use in the sense of all ##\alpha##-RM functions (just including the total ones).

Further, in Theorem-9 the notion of cofinality of a function ##f:\beta \rightarrow \alpha## (##\alpha,\beta \in Ord##) is also mentioned. When I searched for it online, I found a book Ref.[5] that describes clearly what this terminology means. It is described in section-5.3 ("Cofinality") of that book. A function ##f:\beta \rightarrow \alpha## is cofinal in ##\alpha## if:
##\forall x \in \alpha \exists y \in \beta (f(y) \geq x)##
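For a concrete example (my own, not taken from Ref.[5]): the function ##f:\omega \rightarrow \omega^2## defined by ##f(n)=\omega \cdot n## is cofinal in ##\omega^2##, since for every ##x<\omega^2## there is some ##n<\omega## with ##f(n)=\omega \cdot n \geq x##. On the other hand, no ##f:\omega \rightarrow \omega_1## is cofinal in ##\omega_1##, since the supremum of countably many countable ordinals is countable.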

Summary:
Now suppose we set up a suitable infinite program model. For the rest of this post (and the following argument) the phrase "program" means a program from this model. Let ##C## denote the supremum of all values that serve as halting positions (on empty input). I think it can be shown that ##C## is admissible and, furthermore, that ##\beta_C=C##.

Now assuming that my main argument/construction is correct, what I will try to show is that the following two statements contradict each other:
(1) Theorem-9 in Ref.[4] with the assumption which I described in the "Background" section.
(2) There exists some good ordinal ##q## such that ##q \geq C##.

The way I will try to do this is by considering a specific program (in one variable) which computes a (total) function ##F## from ##Ord## to ##Ord##. For any given value ##\alpha##, ##F_{\alpha}## can be thought of as restricting the domain of ##F## to ##\alpha## and the codomain to:
##sup\{F(x)|x<\alpha\}##
The way that our program will work is in stages. At any given stage-##s## the program has settled all inputs ##< \beta_s## (this will be discussed in detail in the sketch of the construction).

I will try to show that if (2) is true then there exists a stage ##p \geq C## (with ##\beta_p=q \geq C##) such that the function ##F_{q+\omega}## has codomain ##\beta_{p+1}=q^+##. If we look at the definition of cofinality described in Ref.[5] and compare it with the definition of the codomain of ##F_{q+\omega}##, we can conclude that the codomain of ##F_{q+\omega}## being equal to ##\beta_{p+1}=q^+## means its cofinality is also ##q^+##.

On top of what I mentioned above, I will also try to make a case for the function ##F_{q+\omega}: q+\omega \rightarrow q^+## being ##q^+##-computable (in the sense in which I described). Now since ##q^+## is admissible, the existence of ##F_{q+\omega}: q+\omega \rightarrow q^+## conflicts with (1) above.

Now one question is whether there is a good admissible ordinal ##\geq C## (recall that ##C## can be shown to be countable). In Ref.[6], the statement of theorem-5, along with the first line in theorem-6, seems highly suggestive that this is the case. But still, I am not a logician and this is a very technical paper. A logician could probably confirm whether this is the case or not. Nevertheless, I will try to look at whether this is true or not independently, based on the cofinality-based definition. But it seems that first the accuracy of the argument that follows is important (along with whether the assumption about Theorem-9 in Ref.[4] holds or not).

References:
[1] https://math.stackexchange.com/questions/2907733
[2] https://www.sciencedirect.com/science/article/pii/0003484379900251
[3] https://www.sciencedirect.com/science/article/pii/S0168007209000086
[4] https://www.sciencedirect.com/science/article/pii/S0168007209000098
[5] Set Theory for the Working Mathematician (1997)
[6] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.3332&rep=rep1&type=pdf

P.S.
One of the reasons for writing the background section in so much detail was that, in case I made a big error in understanding something I quoted or used in the argument, it would be clear and easy to identify.

The next post I write will contain a detailed sketch of the argument/construction (obviously this assumes that I haven't made a big/unfixable error in understanding something mentioned in the background).
 
Last edited:
  • #23
Sorry, the previous post once again (unfortunately) got more confusing than I would have wished. Let me describe the main issue separately before I move on to the sketch of the main construction.

In Ref.[3] the notions of ##\alpha##-RM and ##\alpha##-TM functions were described towards the end of section-3. Furthermore, these two notions (and the separate notion of "##\alpha##-partial recursive functions") are equivalent for admissible ##\alpha##, and we may reasonably use the phrase "##\alpha##-partial computable functions". Within that collection of functions, I used the phrase "##\alpha##-computable functions" for those functions which are total. In Ref.[4] Theorem-9, the notion of "##\alpha##-computable functions" is used. However, the phrase is defined in definition-2, which seems to suggest the phrase may have a different meaning there. Unfortunately, I didn't notice this until I had almost completed the previous post entirely.

That's because if the notion of "##\alpha##-computable function" can really be taken to mean (in Ref.[4] Theorem-9):
"those ##\alpha##-partial computable functions which are total"
then I am fairly confident that my argument should work. If not, then it certainly seems to become more complicated.

EDIT:
Due to confusion caused by this post (unfortunately), I have been trying to think of precise characterisations in which I am reasonably confident that the argument/construction that follows (in next post) will work (as it is). I think I have managed to characterise all those conditions in a fairly precise way.

So here are the three precise characterisations:
(1) Checking the property of admissibility for any ##x \in Ord## is decidable, and furthermore decidable in time that can be easily bounded below ##x^+## (using some ordinary function in ##x##). What I mean, more specifically, is that for any admissible ##x##, checking admissibility for all values ##\geq x## and ##<x^+## should take exactly ##x^+## time.

(2) The notion of "##\alpha##-computable function"(in Ref.[4] Theorem-9) can be taken to mean:
"those ##\alpha##-partial computable functions which are total"

This was already mentioned above. The main thing is that (2) implies checking admissibility for any ##x \in Ord## is decidable and furthermore decidable in a time which should be some ordinary function in ##x##.

(3) The terminology used in Definition-8 and Theorem-7 in Ref.[4] seems to be suggestive that one can use graphs of ##\alpha##-computably enumerable sets to represent total functions (for theorem-8). However, I am quite unsure since I don't understand the logical terminology well.

It seems that in this case too the argument will work. That's because it seems to me that total functions represented by graphs of ##\alpha##-computably enumerable relations might be fairly easy to use to decide admissibility (and hence once again imply the required condition in (1)). Though I will need to check this more carefully.

==========

Now I will proceed with the argument. In the argument I will assume (1) [since it is a more generic assumption compared to (2),(3)].
To complete the argument [assuming (2), for example] would then simply require a brief description of the simple procedure [assuming (2), it will be very simple] that decides admissibility.

Also, in the detailed sketch below, unless a qualification is made, I will mean:
##\alpha##-partial computable function = ##\alpha##-RM functions
##\alpha##-computable function = those ##\alpha##-RM functions which are total

EDIT2:
Sorry for another edit, but I thought of a few more important points that seem worth mentioning. Now, after thinking about it, it seems that even if in Ref.[4]--Theorem-9 by "##\alpha##-computable function" the author meant ##\alpha##-partial computable functions (the way I used it), it is quite likely that my argument below (in the next post) still isn't affected by this.

One would need a separate part in that case though, which would be about showing/explaining how this doesn't make a difference. I won't focus on this right now though. I will try to add the relevant details when writing a more detailed document. What I describe in the sketch below nevertheless should remain the main part of the argument.
 
Last edited:
  • #24
Sketch:
Now I will discuss the sketch of the construction in some basic detail. We will assume access to a predicate of the form ##isAdmissible:Ord \rightarrow \{0,1\}## (which is assumed to be total). Furthermore, we will assume that for any ##x \in Ord##, the decision procedure takes a very reasonable amount of time (bounded by a fairly ordinary growth function on ordinals). One necessary implication would be that the time to compute ##isAdmissible(x)## for any ##x \in Ord## would be less than ##x^+##.
[Actually, somewhat weaker condition(s) than the above paragraph could suffice (since, for some ##X \in Ord##, we don't have to be running the "admissible check" for all ordinal values ##<X##, and a certain proper subset of ##X## could suffice). Let's not go into this complication for now, as it will detract from the main point. I will try to address all the smaller points of this kind when writing a more detailed document]

At the risk of too much redundancy, I will repeat an important point that has been mentioned before. Regardless of whether we interpret Ref.[4]--Theorem-9 as:
(a) "An ordinal α closed under ordinal exponentiation is admissible iff there is no ##\alpha##-computable function g that maps some ##\beta<\alpha## cofinally into ##\alpha##."
OR
(b) "An ordinal α closed under ordinal exponentiation is admissible iff there is no ##\alpha##-partial computable function g that maps some ##\beta<\alpha## cofinally into ##\alpha##."

For now (ignoring some book-keeping details), either of these conditions will be enough to provide us with the concrete function ##isAdmissible:Ord \rightarrow \{0,1\}## (as a specific program) which we require. The case for (a) is quite direct. The case for (b) requires a more careful argument (which I will cover in detail later). The worst-case scenario that I see for (b) is that perhaps we may have to use a slightly stronger condition than "closure under ordinal exponentiation" (but closure under exponentiation may actually be enough) --- this depends on some book-keeping details.

In any case, with the assumption that the predicate ##isAdmissible(x)## is available to us, we proceed. Before going further, there is one other point that I didn't mention clearly enough before. I have been writing:
##\alpha##-partial computable function = ##\alpha##-RM functions
##\alpha##-computable function = those ##\alpha##-RM functions which are total

Assuming that we only deal with ordinals of the form ##\omega^x## (##x\geq 1##), can we also use the following?
##\alpha##-partial computable function = ##\alpha##-TM functions
##\alpha##-computable function = those ##\alpha##-TM functions which are total

Since the OTM model was considered in Ref.[4], it appears that this may have been used (in that reference). Once we know that, for the ordinals of interest (say ones closed under exponentiation, or a slightly stronger condition), ##\alpha##-TM functions are exactly the same as ##\alpha##-RM functions, we don't have to worry about this distinction anymore (again, this is one of the points that requires careful book-keeping-style work).

Now let's proceed further. First, recall that we are considering a program which computes a function ##F:Ord \rightarrow Ord##, and furthermore the given program halts for all inputs ##x \in Ord## (that is, ##F(x)## is always well-defined). The program proceeds in stages. So we have a separate variable (marking the ordinal-valued stage number) which starts from ##0## and only ever increases. This variable indicates the stage we are in. With any stage ##s##, we will associate three subsets of ##\mathbb{N}## which we write ##T_s,A_s,B_s##. These subscripts are meant to indicate that these sets change as the stage of the program changes.

The main property of ##A_s## is that values are only ever removed from it. That is, if we have ##x<y## (both ordinals), then we always have:
##A_y \subseteq A_x##
The main property of ##B_s## is that values are only ever added to it. That is, if we have ##x<y## (both ordinals), then we always have:
##B_x \subseteq B_y##
Furthermore, for any given stage ##s \in Ord## we have:
##A_s \cup B_s = \mathbb{N}##
Also, initially when ##s=0##, we will have:
##A_s=\mathbb{N}##
##B_s=\phi##

We define ##A_s,B_s## by first defining ##T_s## in a general manner. The main property we want for this set is that whenever ##x \neq y## (both ordinals), we have:
##T_x \cap T_y=\phi##
It will become clear why this property holds when we consider the definition of ##T_s##, which we now describe. For any natural number ##x \in \mathbb{N}##, we say that ##x \in T_s## iff:
(1) ##\beta_s## is admissible
(2) For the program (of one variable) with index ##x##:
--------(a) for all inputs ##<\beta_s##, the program halts with either ##0,1## or ##2## as output and in a time ##<\beta_s##
--------(b) it encodes a linear-order (on some subset ##S## of ##\beta_s##) but not a well-order (on ##S##)
(3) For all admissibles ##\alpha## less than ##\beta_s##, the following two conditions can't be satisfied simultaneously by the program with index ##x##:
--------(a) for all inputs ##<\alpha##, the program halts with either ##0,1## or ##2## as output and in a time ##<\alpha##
--------(b) it encodes a linear-order (on some subset ##S## of ##\alpha##) but not a well-order (on ##S##)

Now, for any stage ##s \in Ord##, let's give the definition of ##A_s,B_s## respectively. It was already mentioned that ##A_0=\mathbb{N}## and ##B_0=\phi##. So we need to cover the successor and limit cases for both ##A_s,B_s##. For any ##x \in Ord##, we define:
##A_{x+1}=A_x-T_x##
##A_x=\bigcap\limits_{i=0}^{i<x} A_{i}## (for limit values ##x##)
And similarly:
##B_{x+1}=B_x \cup T_x##
##B_x=\bigcup\limits_{i=0}^{i<x} B_{i}## (for limit values ##x##)
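As a sanity check on these definitions, here is a toy finite analogue of the bookkeeping (entirely my own sketch; `detect_T` is a hypothetical stub standing in for the transfinite detection of ##T_s##, and only successor steps are simulated):

```python
def run_stages(num_stages, detect_T, universe=None):
    # Finite analogue of the T_s/A_s/B_s bookkeeping.  A starts as the
    # whole index set and only ever loses elements; B starts empty and
    # only ever gains them.  The returned T's are pairwise disjoint
    # because an index, once detected, has already left A.
    universe = universe if universe is not None else set(range(100))
    A, B = set(universe), set()
    for s in range(num_stages):      # successor steps only; at limit
        T = detect_T(s, A)           # stages one would instead take
        assert T <= A                # the intersection/union over all
        A, B = A - T, B | T          # earlier stages
        assert A | B == universe     # invariant: A_s union B_s = N
    return A, B

# Example with a toy detector that "catches" multiples of s+2:
A, B = run_stages(3, lambda s, A: {x for x in A if x % (s + 2) == 0})
print(sorted(B)[:10])
```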

With these definitions given, let us consider what the function ##F:Ord \rightarrow Ord## given by our program actually calculates. The values of ##F(x)## (for ##x<\omega##) don't seem important, so we might as well set:
##F(x)=0## for ##x<\omega##
But in the region given by the inequality ##\omega=\beta_0 \leq x < \beta_1##, what is the value of ##F(x)##? To be able to answer that, we will need to define a bit of further terminology. First, let's use the symbol ##a_s(x)## to mean the ##x##-th element of ##A_s## (when the elements of ##A_s## are sorted in increasing order). Secondly, let us use the symbol ##I_x(s)## to represent the order-type of the initial well-ordered segment of the corresponding linear-order (on some ##S \subseteq \beta_s##) that the program (of one variable) with index ##x## computes. If the program with index ##x## doesn't encode any linear-order (for any ##S \subseteq \beta_s##), then by convention we set the value of ##I_x(s)## to be ##0##.

Now, with the notation described in the previous paragraph, we can write the definition of ##F(x)## in the region ##\omega=\beta_0 \leq x < \beta_1##:
##F(\beta_0+x)=I_{a_0(x)}(0)## ----- ##0 \leq x < \omega##
##F(\beta_0+x)=0## ----- ##\omega \leq x < \beta_1##
In fact, the way I want to define ##F## in general, a similar set of identities holds for any stage ##s##. That is, for any ##s \in Ord##, whenever ##\beta_s## is admissible, we want to define:
##F(\beta_s+x)=I_{a_s(x)}(s)## ----- ##0 \leq x < \omega##
##F(\beta_s+x)=0## ----- ##\omega \leq x < \beta_{s+1}##
If ##\beta_s## isn't admissible, we can simply define:
##F(\beta_s+x)=0## ----- ##0 \leq x < \beta_{s+1}##

We have considered the description of the function ##F##. Now we want to consider some aspects of the program that computes ##F##. When we have ##s=0##, the program checks all elements of ##A_0=\mathbb{N}##. For every element ##x \in A_0##, it is checked whether the program with index ##x## represents a well-order on a subset of ##\mathbb{N}## or not (recall that for any input ##<\beta_0=\omega## the program will halt in a time ##<\beta_0## with an output of either ##0,1## or ##2##). Those indexes that represent a linear-order (on some ##S \subseteq \beta_0##) but don't correspond to a well-order (on ##S##) are added to ##T_0##. Later on, this set ##T_0## will be subtracted from ##A_0## and added to ##B_0## (to form ##A_1## and ##B_1## respectively). When all indexes in ##A_0## have been checked for representing a well-order or not, our program should have settled all values which are ##\geq \beta_0## and less than ##\beta_0+\omega##. After this, the outputs for the next inputs (from ##\geq \omega \cdot 2## to ##<\beta_1##) are set to ##0##. At this point the stage counter is increased by ##1## (hence ##s=1##) and all indexes in ##A_1## are checked (as to whether they represent a well-order on a subset of ##\beta_1## or not) in a similar manner as above.

Now this is of course a transfinite process (and perhaps a precise way to present it would be to write a full program computing ##F##, which I might try later on). There are a number of points that need to be considered.
(1) First, at some limit stage ##s##, if ##\beta_s## isn't admissible then we won't check all indexes in ##A_s## (as to whether they represent a well-order on some subset of ##\beta_s## or not). That's because if ##\beta_s## isn't admissible then such a check is not necessary for our purposes. So we simply set ##F## to ##0## in the region from ##\geq \beta_s## to ##< \beta_{s+1}##. For example, when ##s=\omega##, because ##\beta_{\omega}## isn't admissible, we just set ##F(x)=0## in the region ##\beta_{\omega} \leq x < \beta_{\omega+1}##. Note that for us to be able to do this, we will need to invoke the function ##isAdmissible(\beta_{\omega})##. And in general we invoke ##isAdmissible(\beta_{s})## and proceed based on whether true or false is returned.

If we get ##isAdmissible(\beta_{s})## false for some limit stage ##s##, we start searching for ##\beta_{s+1}=(\beta_s)^+## using the function ##isAdmissible## repeatedly.
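A minimal sketch of that search step (my own finite toy; `is_admissible` is a stub predicate, and the ordinal successor is modelled by integer increment):

```python
def next_admissible(beta, is_admissible):
    # Search upward for the least "admissible" strictly above beta,
    # querying the assumed isAdmissible predicate at each step.  In the
    # transfinite setting the increments run through all ordinals
    # (successors and limits) rather than just integers.
    candidate = beta + 1
    while not is_admissible(candidate):
        candidate += 1
    return candidate

# Toy stand-in predicate: pretend the "admissibles" are multiples of 7.
print(next_admissible(15, lambda x: x % 7 == 0))  # 21
```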

(2) At stage ##s##, the general way we can check whether a given program (of one variable) with index ##x \in \mathbb{N}## represents a well-order (on some ##S \subseteq \beta_s##) or not is to first check whether it calculates a linear-order on ##S## or not. This is inexpensive and easy to check. Then we set some set (that varies with ##i \in Ord##), say ##X_0##, to ##S##. We try to find the ##0##-th minimal element, then the ##1##-st minimal element, and in the ##i##-th iteration the ##i##-th minimal element, while also removing the values that fit in as minimal elements from ##X##. If the set ##X## turns empty on the ##\alpha##-th iteration (that is, ##X_\alpha=\phi##), then of course the given index describes a well-order on ##S## (with order-type ##\alpha##). If it happens that, before ##X## turns empty, we encounter a stage ##i## such that ##X_i \neq \phi## and there is no ##i##-th minimal element, then, while the given index/program describes a linear-order on ##S##, it doesn't describe a well-order.

This process was also described in post#17 (in a less general situation) and in a number of posts before that (in this topic). One (relatively) small amendment we want to make is that, whenever we are considering the ##i##-th minimal element, we also check ##isAdmissible(i)##. So, for example, suppose we were at a stage ##s## (with admissible ##\beta_s##), checking all indexes in ##A_s## for being well-orders on some subset of ##\beta_s##. Then, for a given index, if we reach the point where we are checking the ##\beta_{s+1}##-th minimal element, we will be making the check ##isAdmissible(\beta_{s+1})##. Because this check will be true, we stop and return that our index doesn't encode a well-order. Recall that the conclusion in the previous sentence is true because in the "background section" (in post#22) the given references mentioned that (for admissible ##\alpha##):
##|\alpha-computable|=|\alpha-recursive| \leq \alpha^+##
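Here is a finite analogue of that minimal-element extraction (my own sketch; for a finite set every linear order is trivially a well-order, so the "no minimal element" branch is only the finite shadow of the transfinite failure case):

```python
def order_type_if_well_order(S, less):
    # S: finite set; less(a, b): decidable strict linear order on S.
    # Repeatedly extract the minimal element of what remains, counting
    # iterations.  In the transfinite analogue the count runs through
    # ordinals, and a nonempty remainder with no minimal element
    # witnesses a linear order that is not a well-order.
    X = set(S)
    alpha = 0
    while X:
        minimal = [x for x in X
                   if all(not less(y, x) for y in X if y != x)]
        if not minimal:
            return None   # linear but not well-ordered (transfinite case)
        X.remove(minimal[0])
        alpha += 1
    return alpha          # the order type reached when X empties

# Example: the reversed usual order on {0,...,4}; minimal element 4 is
# extracted first, and the order type (here just the size) is 5.
print(order_type_if_well_order(range(5), lambda a, b: a > b))  # 5
```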

(3) If we encounter a stage ##s## where ##\beta_s=\alpha## is admissible and bad, then we will have:
##|\alpha-computable|< \alpha^+##
So, in an a priori sense, it is possible that we exhaust all indexes in ##A_s## without encountering ##\alpha^+##. Once again, we will have to use the ##isAdmissible## function repeatedly to find ##\alpha^+##.

(4) When we reach a stage ##s \geq C##, ##T_s## must always equal ##\phi##. If this weren't true, then it would give a method (for a program with no input) for halting beyond ##C##. That means that for ##s \geq C## the sets ##A_s,B_s## never change (for example, no elements are ever removed from ##A_s##).
At this point, if we encounter a stage ##p \geq C## such that ##\beta_p=q## is admissible and good, then when we try all the indexes in ##A_p## it is given to us that none of them will represent a linear-order (on some ##S \subseteq q##) that is not a well-order (on ##S##). For if that were the case, it would mean that ##T_p \neq \phi##, which violates what was mentioned above about ##T_x=\phi## for all ##x \geq C##.

Furthermore, since ##q## is good, we have:
##|q-computable|=|q-recursive|=q^+##
Now, if we observe the way ##F## was defined, this means that the values ##F(x)## (for ##q \leq x < q+\omega##) can't be bounded by any value below ##q^+##. Hence the cofinality of ##F_{q+\omega}## should be ##q^+##.

(5) We also want to consider the running time of our program that calculates ##F##. Let's write ##Time(x)## for the running time of our program on input ##x##. We will use the notation ##Time(<x)## (we will really only be interested in limit values ##x##) to mean:
##Time(<x)=sup\{Time(i)|i<x\}##

Assuming the program to be written in a reasonable manner, the following identities are generally true:
##Time(<\beta_s)<\beta_{s+1}## (for any non-limit stage ##s##)
##Time(<\beta_s)=\beta_{s}## (for any limit stage ##s##)

But for stage ##p## we can write stronger inequalities (because of ##T_p=\phi## and ##q=\beta_p## being good and admissible):
##Time(\beta_p+x)<\beta_{p+1}=q^+## (for all ##x<\omega##)
##Time(<\beta_p+\omega)=\beta_{p+1}=q^+##
which implies that ##F_{q+\omega}: q+\omega \rightarrow q^+## is ##q^+##-computable.
 
Last edited:
  • #25
It seems that I may have been a bit quick to set aside the idea from post#15 to post#20. It definitely seems plausible to me (on some thought) that the "extended construction" that I talked about towards the end of post#20 (but didn't describe) may actually work. But I will definitely need to think about it more carefully.
Edit: It appears that if it actually works, it might turn out to be even simpler than the idea from post#22 to post#24. So at least describing the main idea should turn out to be less time consuming (if it works) than I thought (when I first wrote this post). End

Anyway, I will not bump this thread with any detailed description of the "extended construction" (which builds upon the ideas of post#15 to post#20 and extends them). Instead I will just try to think about it and then link to the document that describes the construction in quite a fair amount of detail (probably in around two or three weeks' time ... give or take a week or two). This is in case I can make the construction extending the ideas in posts #15 to #20 work.
Obviously, the posts from #15 to #20 are fairly scattered (as I mentioned before). That document will be much more organized (and, once again, any changes smaller than unfixable mistakes can be handled simply by editing/modifying that document ... while not changing the link).

There is a reason for this change. This is from second paragraph of post#21:
SSequence said:
This month, I took some time to think about the construction in the last three posts (along with post#15, post#16). It was clear (after five or six days) that there was an assumption made by me that wasn't warranted by the links I quoted. For Ref(B) in post#18, the equivalence of ##p##-partial computable and ##p##-partial recursive that I described was warranted only for admissible values in that reference.

So, quite briefly, one question is whether this construction could be recovered. I do not know whether it is possible or not (though this is not what I have been thinking about). For concreteness, consider the ordinal ##p## which is the limit of ##\omega^{CK}_i## (where ##i \in \mathbb{N}##). Now ##p## isn't admissible. And when we draw out the ##p##-partial computable functions, the equivalence to ##p##-partial recursive functions isn't warranted by Ref(B) in post#18. Since I don't know anything about ##\alpha##-recursion theory, I can't comment on this any more than that.
Now I still don't know anything about ##\alpha##-recursion (understanding it reasonably well would be a very long road).

However, I didn't notice a certain point while writing this (I wouldn't be surprised if I originally thought of it three months ago ... but when I started thinking about the "extended construction" again after two months it didn't occur to me immediately). For a specific example, take the value:
##\beta_\omega = sup\{\omega^{CK}_i | i<\omega \} ##
An important point is that just above the value ##\beta_\omega##, there should exist a point:
##\beta_\omega < \alpha <\beta_{\omega+1}##
such that we will have:
##|\alpha##-computable##|\geq \beta_{\omega+1}##

Why is this so? That's simply because below ##\beta_{\omega}## there are exactly ##\beta_{\omega}## many halting/clocking positions [1]. What that means is that there is a copy of ##\beta_\omega## that will be available to the ##\alpha##-program.

However, because ##\alpha## will only be slightly bigger than ##\beta_{\omega}##, it appears to me (unless I am mistaken) that the possibility:
##|\alpha##-computable##|> \beta_{\omega+1}##
shouldn't actually exist. If it did, then that would mean being able to clock right on top of ##\beta_{\omega+1}## (something that is supposed to be impossible, even for fully general models [2]). And ruling out the previous inequality would mean that:
##|\alpha##-computable##|= \beta_{\omega+1}##

And the main worry I had about my argument was not being able to see clearly where we were going at each stage-##s##. To show why this is important, I will give a small hypothetical example. Suppose that at stage-##\omega## we reached ##\beta_\omega##. Now if, at the next stage, we had (completely hypothetically):
##|\alpha##-computable##|= \beta_{\omega \cdot 2}##
then from this equation we couldn't directly conclude anything about missing indexes (that's because ##\beta_{\omega \cdot 2}## isn't admissible), whereas if we have (as is actually the case):
##|\alpha##-computable##|= \beta_{\omega +1}##
then we can conclude that there will have to be missing indexes.

Anyway, there definitely needs to be some work done here ... and many more points checked carefully (since the argument extended from post#15--post#20, while not too long, isn't that short either).

[1] While it is easy to show this for a specific case, showing it in general would be more complicated (and this is one of the important parts of the argument). Generally speaking, one expects that the missing indexes might possibly be used.

[2] Note that the theorem towards the end of the paper (in OP) wouldn't suffice for this. Since we are considering fully general models, we would need such a theorem for that kind of model. However, as I have mentioned before (towards the end of post#18), I did find a reference for that.

=========================

I will not be bumping this thread any further except to give links to the documents related to the two arguments:
1---- post#22 to post#24
2---- argument extending the idea from post#15 to post#20

3---- OR possibly to point out an unfixable mistake in the above two arguments (for any smaller issue, I would simply edit the linked document).
4---- Possibly also posts for the sake of discussion related to the arguments (in case someone is interested and asks/inquires regarding the explanation or validity of a certain point). I would probably like to complete these documents first anyway.

The first version of the main document related to post#22 to post#24 should be completed and linked in a few days (I will link it unless I find an unfixable mistake), and the one for post#15 to post#20 probably in a few weeks. Any later version can just be swapped in at the same link.
[I may also separately post detailed implementations (for one or both arguments), once again as a link]

Apart from the addition of a few important parts, this document is similar to post#22 to post#24, except that I have been more careful/precise about some points where I was slightly careless in those posts.
 
Last edited:
  • #26
The link (v1.03) for the document pertaining to the argument from post#22 to post#24.

Few things to mention:
(1) The order of references is different from post#22.
(2) It isn't in TeX. I have noticed that people generally seem to find it easier to read in a more standardized format. I found some short introductions (about 20--30 pages long), so I might try to read the basics and convert this into a more formal typeset (in later versions) in the next few weeks (without necessarily bumping the thread).

(3) I will try to upload later versions (possibly fixing smaller mistakes or adding some things for example) on the same link (if I can get it to work ... sigh) ---- instead of bumping the thread (which becomes unavoidable on different links because of the strict time limit on edit). If I can't get the same link thing to work, I suppose I will try a different service.

(4) If I should upload using some other service, let me know.

(5) For the other argument (post#15 to post#20), I will probably post it soon enough (again as a linked document), in a few days. Next comes the task of converting to TeX ... and after that, writing/linking detailed implementations. Of course, that's assuming I don't recognize or find a big enough issue that I lose the motivation to complete it.

(6) Till now I haven't been able to find a substantial accuracy issue. But then again, only I have been looking at it. It would be good to ask logicians who have some experience with the topics at hand. I would probably want to convert it to TeX first though (for the reasons mentioned in (2)). People seem to find it easier to read.

Edit:
Sorry, I am having some difficulty using versions with the same link. So I guess when I post the link for the argument concerning post#15 to post#20, I will post links (to both arguments) using a different service (where I can understand how to use versions with the same link). For now, I will leave this link as it is.
 
  • #27
  • #28
(1) The first link is exactly the same as in post#26 for now (same document and same version), but this may change in case of a newer version. I don't think there is much to add (except possible updates to the links above, which don't require any bumping) ---- besides possibly adding links for implementations. Barring an unfixable mistake (or one that makes the document(s) irrelevant, for example), I will probably keep these links.

(2) Conversion to TeX will take longer than I initially thought.

Anyway, that doesn't change the argument itself. So I will probably ask it as a question sooner (after a number of iterations of going over the arguments in the next week or two).

(3) The second link is the second argument I was talking about (taken from post#15 to post#20, but with a number of changes). Changes can be expected in later versions, as I essentially wrote it in two days (not the main idea, just the writing). So it can be improved.

In particular, I have a feeling that the last section can be improved, but the main idea is conveyed (to a reasonable extent) in the sections before it.

(4) Regarding the implementations (for both arguments) that I wrote about in previous posts (e.g. point (5) in post#26): one reasonable question is how important the implementations are. I am speaking purely from the point of view of correctness (a detailed implementation certainly ought to be instructive in any case).

For the first link in the previous post, it doesn't seem that an implementation would change the correctness of the argument. The point/assumption that I mentioned within the document should be quite important though.

A similar remark holds for the second link, but with one important exception. The argument essentially hinges on the point that, with no missing index at some stage ##s##, it is possible to halt at ##\beta_{s+1}##, and this would essentially require setting up a flag in a certain way. Since the whole argument hinges on this, an implementation might help to show it clearly (or, if it isn't actually possible, one would stumble while trying to implement it).

While I have little or no experience in drawing OTM diagrams*** (or in dividing such programs neatly into submodules), as I mentioned in the linked document, it would be very surprising if setting up the flag (as mentioned in the above paragraph) wasn't possible; see the toy sketch after the footnote below.

*** My only experience is implementing the predecessor operation on a similar model I thought of some years ago (that model had resetting the pointer to the 0-position as the basic operation, along with three types of states: read/write/move).
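On the flag mechanism mentioned above: under the usual OTM convention (as in Koepke's model), a 0/1 tape cell takes the lim inf of its earlier values at a limit stage, so a flag that gets set at some stage and is never reset afterwards is still visible at the limit. Below is a tiny Python toy ---- purely illustrative, not the construction from the linked document ---- approximating that limit rule over finite stages:

```python
# Toy illustration of the OTM limit rule: a cell's value at a limit stage
# is the lim inf of its values at earlier stages. A flag that is set and
# never reset survives to the limit; one that keeps being reset does not.

def liminf_01(value_at, horizon=10_000, tail=1_000):
    """Approximate the lim inf of a 0/1 stage sequence by inspecting a tail.

    value_at(n) gives the cell's value at (finite) stage n. This is only a
    finite stand-in for the true limit rule at a limit stage.
    """
    return int(all(value_at(n) == 1 for n in range(horizon - tail, horizon)))

set_once = lambda n: 1 if n >= 5 else 0   # set at stage 5, never reset
alternating = lambda n: n % 2             # keeps getting reset

print(liminf_01(set_once))     # 1: the flag survives to the limit stage
print(liminf_01(alternating))  # 0: cofinally many 0s, so lim inf is 0
```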
 
  • #29
A few remarks to conclude this thread. Note that below I am referring to "link1" and "link2" in post#27.

1----- First, observe that there are quite a few mistakes in the posts from post#15 onward. Some of them are there because I couldn't edit; others because I was writing a lot of things while still in the process of learning.
I have a better (general) understanding of some of the basic ideas now (though I still need to learn a lot more).

2----- After some more examination, the argument in "link2" is not correct. It has a logical mistake at one point (if anyone is interested, I can point it out). I didn't revisit or examine it in detail after the first writing, since much of my focus was on "link1" (hence a mistake is not too unexpected for me).

3----- I went through the argument in "link1" many times and couldn't find any mistake (to the extent of my understanding). Now, after understanding things better than before, I believe I see the issue.

The main issue is that I was confusing the definition of ##\alpha##-computable with what could be called ##\alpha##-zeroComputable (##\alpha##-computation with all "parameters" set to ##0##). If we assume that ##\alpha##-computable and ##\alpha##-zeroComputable are equivalent, then the argument in "link1" is logically correct (to my understanding). However, they are not equivalent, which is why the conclusion I drew is not justified.
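To spell out the distinction as I understand it (the exact details depend on the machine model): a function is ##\alpha##-computable if some program computes it when allowed finitely many ordinal parameters ##\gamma_1, \dots, \gamma_k < \alpha## in its initial configuration, while it is ##\alpha##-zeroComputable if some program computes it with ##\gamma_1 = \dots = \gamma_k = 0##. Every ##\alpha##-zeroComputable function is trivially ##\alpha##-computable; it is the converse that fails in general.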

4----- So one question is whether we can still draw some kind of conclusion by modifying the argument in "link1". The answer, to the extent that I can see, is yes. The specific conclusion is:
"There is a bad ordinal ##\leq C##."

Hence a natural modification of the argument in "link1" leads us to conclude that bad ordinals must exist and that the smallest one must be fairly small in comparison to ##C## (the sup of the clockable halting times for OTM/ORM etc.). I think this is a well-known result, proven in recursion-theory texts (I found as much just by a quick look at some sections of some of the books), though the method there may be quite different from this one ---- I have no idea how it is proved there.

Here is a brief overview of the modification. Observe that at values of the form ##\beta_x## (admissibles or limits of admissibles) where the number of clocking positions below ##\beta_x## is exactly ##\beta_x##, the following two notions are equivalent (because natural numbers can then be used to initialize any parameter, and the number of parameters is finite):
##\beta_x##-computable
##\beta_x##-zeroComputable
With this, we can carry out much of the original argument in "link1" (with appropriate modifications) for positions ##\leq C## (however, the conclusion drawn will be different, as mentioned in the previous paragraph).

5-----
I will remove "link2" soon.
I will later modify "link1" to adjust the argument (and the conclusion) as mentioned in point 4 (though I don't see any need to bump the thread for that).
 

1. What is a linear order?

A linear order is a mathematical structure that arranges the elements of a set in a sequence. It is also known as a total order: any two distinct elements of the set stand in a definite "less than" or "greater than" relationship.
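In symbols: a relation ##\leq## on a set ##S## is a linear (total) order if, for all ##a, b, c \in S##:
##a \leq a## (reflexivity)
##a \leq b## and ##b \leq a## imply ##a = b## (antisymmetry)
##a \leq b## and ##b \leq c## imply ##a \leq c## (transitivity)
##a \leq b## or ##b \leq a## (totality)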

2. How is a linear order different from a partial order?

A linear order is a special kind of partial order with the additional property of totality (connectedness): for any two distinct elements of the set, one must be less than the other. In a general partial order, some elements may be incomparable; for example, under the divisibility order on ##\mathbb{N}##, the numbers ##2## and ##3## are incomparable.

3. What is the importance of linear orders in mathematics?

Linear orders are important in mathematics because they provide a way to compare and order objects in a systematic way. They are used in various fields such as algebra, topology, and graph theory to study and solve problems.

4. Can a linear order be infinite?

Yes, a linear order can be infinite. In fact, many commonly used linear orders, such as the natural numbers or the real numbers, are infinite. However, there are also finite linear orders, such as the set {1, 2, 3} with the usual ordering.

5. How are linear orders represented?

Linear orders can be represented in various ways, depending on the context and the type of objects being ordered. In mathematics, they can be represented using sets, graphs, or symbols such as < and >. In computer science, they can be represented using data structures such as arrays or linked lists.
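As a small illustration of the last point, a linear order on a finite set can be stored as a 0/1-valued comparison function (the names below are invented for the example):

```python
# A finite linear order represented as a 0/1-valued comparison function.
order = ["b", "a", "c"]                     # intended order: b < a < c
rank = {x: i for i, x in enumerate(order)}  # position of each element

def prec(x, y):
    """Return 1 if x strictly precedes y in the linear order, else 0."""
    return 1 if rank[x] < rank[y] else 0

print(prec("b", "a"))              # 1: b comes before a
print(prec("c", "b"))              # 0: c does not come before b
print(sorted(rank, key=rank.get))  # ['b', 'a', 'c'] -- recovers the order
```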
