Stabilizations at countable levels

In summary, an ordinal is eventually writable (EW) if some program's output eventually stabilizes on a code for it, never to change again; eventually markable if some variable of a program eventually stabilizes at it; and eventually pointable if, in addition, that variable never decreases.
  • #1
SSequence
Background:
This is going to be a shorter thread compared to the last thread I made, which was quite long. It is related to a question I posted on MSE/mathoverflow a few weeks ago (https://math.stackexchange.com/questions/3403361/eventual-writability-general). The linked thread also contains the link to the mathoverflow version. For the notion of an EW-real see definition 3.10 (https://arxiv.org/abs/1512.06101). Note that the context in which the definition appears is quite advanced, but the definition itself is fairly simple to understand.

The stabilization time is the smallest time at which the output stabilizes. To summarize, what I essentially posted there was that the stabilization times can't be ##<\omega^L_1## for every program. In other words, the supremum of stabilization times must be greater than ##\omega^L_1##. The answers on both sites confirmed that there must be programs with stabilization times ##\geq \omega^L_1##. A simpler way to see it (than I described) was given in the answer to the mathoverflow version (the last paragraph of the answer there).

For the rest of the post I will write "eventually writable" as EW. Also, "code for ordinal ##\alpha##" means "a well-order of ##\mathbb{N}## (in suitably encoded form) with order-type ##\alpha##".

Some Context:
There are two notions in which ordinals can be naturally related to stabilization:
(1) The output section of the program stabilizes with a code for an ordinal, never to change again. Note that this is the definition for an EW-ordinal. That is, an ordinal ##\alpha## is EW iff there exists a program whose output section stabilizes while containing a code for the ordinal ##\alpha##. In this case we say ##\alpha## is an EW-ordinal.
(2) There is a variable (say ##v##) in a program which stabilizes permanently to ##\alpha## (never to change its value again). We say that ##\alpha## is an eventually markable ordinal. Or, in other words, we can say that the given program eventually marks ##\alpha##.

Few points to note before proceeding:
(i) The programs always start completely blank (no input or parameters etc.).
(ii) In point-(1) a program may stabilize permanently without its output containing a code for an ordinal. In that case the real number that stabilizes on the output is still an EW-real.
(iii) The definition given in point-(2) above might seem specific to the computational model. But it isn't too difficult to give an equivalent definition in terms of OTMs (see the answer to the MO version of the question, for example).

I want to discuss a notion which is more strict than the notion of eventually markable.
(3) There is a variable (say ##v##) in a program which stabilizes permanently to ##\alpha## (never to change its value again) without the variable ##v## ever decreasing. In this case we say that ##\alpha## is an eventually pointable ordinal. Or, in other words, we can say that the given program eventually points to ##\alpha##.
For this definition, we may suitably restrict ourselves to programs in which there is no command decreasing the value of ##v##.
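If it helps, here is one way (my own paraphrase, not a definition taken from the linked sources) to write the pointable condition in display form, with ##v(t)## denoting the value of ##v## at time ##t##:
##\exists t_0 \, \forall t \geq t_0 \, [\,v(t)=\alpha\,] \quad \text{and} \quad \forall s \, \forall t \, [\, s \leq t \implies v(s) \leq v(t)\,]##
Dropping the second (monotonicity) conjunct gives back the eventually markable condition from point-(2).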

Summary:
Let ##\eta## be the sup of EW-ordinals. Let ##\eta_0## be the sup of those ordinals whose code has eventually stabilized on the output at some time ##<\omega^L_1##. Obviously we have ##\eta_0<\eta##. Another thing to note is that the sup of markable ordinals which have eventually stabilized at some time ##<\omega^L_1## is also ##\eta_0##. Generally the sup of markable ordinals will be well beyond ##\omega^L_1## (as was noted in the linked threads).

Now one immediate question is: what is the sup of those eventually pointable ordinals which are countable (note that the sup of all pointable ordinals isn't countable, because ##\omega^L_1## can be shown to be pointable)? It can be shown that this is ##\eta_0## too. The next question is whether the pointable ordinals below ##\eta_0## have jumps/gaps or not. In other words, is there an ordinal ##<\eta_0## which isn't pointable? I think it can be shown that, in fact, there are gaps in the countable pointable ordinals. And it shouldn't be difficult to show.

For this, we want to define a notion of ##C_i## (with ##i \in Ord##). Define ##C_0## to be the sup of clockable values. Define ##C_{i+1}## (from ##C_i##) to be the sup of ordinals that are clockable given arbitrary parameters ##\leq C_i##. For limit values ##j## define ##C_j## as:
##C_j=\sup\{C_i \mid i<j\}##
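Just to collect the three clauses in one place (a restatement of the above, nothing new):
##C_0=\sup\{\alpha \mid \alpha \text{ is clockable}\}, \quad C_{i+1}=\sup\{\alpha \mid \alpha \text{ is clockable from some parameter } \beta \leq C_i\}, \quad C_j=\sup\{C_i \mid i<j\} \text{ for limit } j##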

With this we can show the following:
---- ##C_{\omega^L_1}=\omega^L_1##
---- ##\eta_0## is one of the ##C_i##'s. In other words, there exists a ##q<\omega^L_1## such that ##\eta_0=C_q##
---- All stabilizations for EW-ordinals that occur in countable time in fact occur below ##\eta_0## time (and actually, the supremum of the stabilization times for these specific stabilizations is ##\eta_0##). So, in that sense, the initial comment on the MSE answer is right.

---- The set of pointable ordinals below ##\eta_0## have jumps/gaps.

P.S.
Now the last point isn't problematic. After all there are gaps in clocking/halting positions too (on empty input). However, there is something that is strange for me here specifically. I will describe it in the next post.
 
  • #2
Edit:
Removed the incorrect statement (though whatever I wrote in the first/original post should be fine).

I made a number of other interesting observations after writing this thread (though they are relatively simple). I might describe them in later posts (probably in a few days).

Edit2:
Let's focus on jumps/gaps in the set of pointable values below ##\eta_0##. Note that we can think of a gap as an interval of the form ##[\alpha,\beta)##. The length can be defined as the unique ordinal ##l## such that:
##\alpha+l=\beta##

However, I want to use a different definition for length. Let's first define a region ##R_i## (with ##i \in Ord##) as the set:
##R_i=\{x| C_i \leq x<C_{i+1} \}##

Now the length can be defined as the "number" of regions which the given gap/jump passes through. To be more precise, if the given gap is ##[\alpha,\beta)## with ##\alpha \in R_i## and ##\beta \in R_j##, then define the length to be unique ordinal ##l## such that:
##i+l=j##
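For example (hypothetical numbers, just for illustration): if a gap ##[\alpha,\beta)## has ##\alpha \in R_3## and ##\beta \in R_5##, then its length under this definition is the unique ##l## with ##3+l=5##, i.e. ##l=2##, no matter how long ##[\alpha,\beta)## is as an ordinal interval.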

===============

Two main questions that I have been thinking about since last edit:
(1) Does a gap of every possible length ##<\eta_0## occur below ##\eta_0##?
(2) If the answer to (1) is true, is it then true that for all ##x_1<x_2## the smallest position at which a gap of length ##x_1## occurs (for the first time) is smaller than the smallest position at which a gap of length ##x_2## occurs (for the first time)?

If the answer to (1) is positive then it seems that the answer to (2) should be negative.

Though (2) being false does seem slightly odd to me, I am not certain. I will need to think a bit more to know whether there is some actual point here or whether I am just not imagining things correctly.
 
  • #3
SSequence said:
However, I want to use a different definition for length. Let's first define a region ##R_i## (with ##i \in Ord##) as the set:
##R_i=\{x| C_i \leq x<C_{i+1} \}##

Now the length can be defined as the "number" of regions which the given gap/jump passes through. To be more precise, if the given gap is ##[\alpha,\beta)## with ##\alpha \in R_i## and ##\beta \in R_j##, then define the length to be unique ordinal ##l## such that:
##i+l=j##
This definition needs a small fix for the boundary case (otherwise it has an off-by-one error). It should be easy though. The intention is to count the number of regions with which a given gap intersects.
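One possible patch (my own suggestion, assuming the intent is exactly the count of intersected regions): for a gap ##[\alpha,\beta)## with ##\alpha \in R_i## and ##\beta \in R_j##, the regions it meets are ##R_i## through ##R_{j-1}##, plus ##R_j## precisely when ##\beta > C_j##. So one could take the length to be the unique ##l## with ##i+l=j## when ##\beta = C_j##, and ##l+1## when ##\beta > C_j##.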
 
  • #4
Finally, I will briefly describe one other alternative line of thinking (following from the OP but in a slightly different direction from post#2 and post#3). This is actually something I have been thinking about for a week or so (since the answer to the MO version of the question).

Let's try to look closely at the very first jump/gap in the pointable values. It can be shown that:
---- The gap is of the form ##[C_k,m)## with ##m<C_{k+1}##. In other words, the gap starts at one of the ##C_i##'s and has length ##0##.

Now let's look at all the programs at run-time ##C_k## (and the values of the variable ##v## in them). For a program with index ##e \in \mathbb{N}##, denote the value of variable ##v## at time ##t## as ##v_e(t)##. Consider those programs for which:
##v_e(C_k)=C_k##
Or, in other words, consider the set ##A \subset \mathbb{N}## such that:
##A=\{e \in \mathbb{N}|v_e(C_k)=C_k\}##

----- Interestingly, if we assume ##m## to be reasonably closed, it can be shown that for all indexes ##e \in A## we have ##v_e(m)=m##. It can also be shown that ##m## is the smallest value greater than ##C_k## for which the previous equality holds (for all indexes in ##A##). The last statement may seem peculiar and not that obvious. I showed it by drawing figures (along with some reasoning) for a sizeable number of cases (about 6 or 7). Note that this last statement substantially reduces the possibilities with regard to what might actually be happening at the occurrence of the first jump/gap in pointable values.
My idea with this was that we can combine the last statement I described above with the following observation:
"If a given ##x \neq C_i## (for any ##i \in Ord##), then this can be verified (w.r.t. our programs)."

Note that for all values ##C_i## (with ##i<k##) among all those programs with index ##e## for which ##v_e(C_i)=C_i## some of them will be left behind and will keep pointing to ##C_i## (because, by assumption, ##C_i## with ##i<k## must be pointable). So ##C_k## is unique in this regard.

So the main thing for me is whether the statements mentioned in this post imply a genuine issue or not (in which case it would just be programs behaving in a very specific way).

Note:
I have mentioned a lot of points without going into the details (even though they should generally be simple). This has been to keep the posts from getting so long as to be unreadable. I think I can justify much, if not all, of what I have written (except where there may be a slip-up or a possible mistake ... which I don't rule out for some of the more complex statements).

If it does occur to me that I made a mistake somewhere, I will point it out.

Other than that, if I do happen to get the time, I will simply link the details in a separate single post (rather than making multiple posts).
 
  • #5
Adding a brief sketch of a certain idea; I was thinking about its basic point (but not any details) last month. It is about using some kind of cardinality argument. Now, before going into further details: several ideas (cardinality-based or not) of the kind I have been considering generally don't seem to work, for two reasons:
(i) In the high-level picture that one is trying to imagine (ignoring the lower-level details) the given idea is supposed to work. However, it doesn't actually work, because when we try to consider the complete picture (including the lower-level details) the given construction stops well before the point that one is actually considering in the high-level picture. This is because the same phenomenon is being repeated indistinguishably many times at smaller points.

(ii) Once one fixes the construction in (i) to also account for the lower-level picture [so the construction can actually reach the point that we were considering], the given construction fails to work at the point one was originally imagining (due to the modifications made to the construction).

Admittedly, what I wrote just above is vague without context. I could use some examples to describe it better, but that would get too long. But since it seems to be a recurring/common phenomenon, I thought I would mention it.

=================

P.S.
At any rate, I will briefly sketch the construction/idea I thought of (at some basic level of detail) in a day or so. My pc is overheating very quickly so it might take some more time than actually needed.
 
  • #6
The constructibility axiom (##V=L##) is assumed, so ##\omega_1=\omega^L_1##, ##\omega_2=\omega^L_2##, and so on...

An indexing of programs that eventually point (the notion mentioned in post#1) to some (ordinal) value is assumed. Some notation that is used in what follows:
##y_e(t)## ---- the value that the program with index ##e \in \mathbb{N}## is pointing to at time ##t \in Ord##
##Y_e## ---- the value that the program with index ##e \in \mathbb{N}## eventually points to

Now let's define the notion of a configuration [I couldn't think of a better term; "snapshot" is the only alternative that came to mind]. We want to think of a function ##\mathrm{conf}:Ord \rightarrow \mathbb{R}##. Here ##\mathrm{conf}(t)## is defined as a real number suitably encoding the function ##\mathrm{lessequals}: \mathbb{N}^2 \rightarrow \{0,1\}##, which in turn is defined by the following conditions [with ##a,b \in \mathbb{N}##]:
##\mathrm{lessequals}(a,b)=1## iff ##y_a(t) \leq y_b(t)##
##\mathrm{lessequals}(a,b)=0## iff ##y_a(t) > y_b(t)##
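As a purely illustrative aside (my own toy encoding, not something from the thread or the linked papers): one concrete way to read "a real number suitably encoding ##\mathrm{lessequals}##" is as the set of pairing-function codes of the pairs sent to ##1##. A finite fragment of this, in Python:
[CODE]
def cantor_pair(a, b):
    # standard Cantor pairing bijection N^2 -> N
    return (a + b) * (a + b + 1) // 2 + b

def encode_conf_fragment(lessequals, n):
    # encode the restriction of lessequals to {0,...,n-1}^2 as a set of
    # naturals: the code of (a, b) goes in iff lessequals(a, b) == 1
    return {cantor_pair(a, b)
            for a in range(n) for b in range(n)
            if lessequals(a, b) == 1}

# toy usage: pretend y_e(t) = e for every index e at some fixed time t,
# so that lessequals(a, b) = 1 iff a <= b
print(sorted(encode_conf_fragment(lambda a, b: 1 if a <= b else 0, 3)))
[/CODE]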

Furthermore we want to define (ordinal) values ##p_i## for an arbitrary ##i \in Ord##. We have ##p_0=\omega_2##. Further, ##p_{i+1}## is defined as the supremum of all those times when any program that was (temporarily) pointing to ##p_i## moves to ##p_i+1##. To put it symbolically:
##p_{i+1}=\sup\{t \in Ord \,\,\, | \,\,\, \exists e \in \mathbb{N} \,\,\, [y_e(t)=p_i \wedge y_e(t+1)=p_i+1] \,\}##

If ##j \in Ord## is a limit then, as might be expected, we simply define ##p_j## as supremum of all ##p_i##'s (where ##i<j##).
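In display form, mirroring the ##C_j## clause above:
##p_j=\sup\{p_i \mid i<j\} \quad \text{for limit } j##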

==================

Now with this notation in place, let's try to get to the idea. We can enumerate all (constructible) subsets of ##\omega## and use these to eventually point to ##\omega_1##. Of course this was part of the main linked post in post#1. I think that [in a somewhat similar but slightly more complicated way] we can also enumerate all (constructible) subsets of ##\omega_1## and use these to eventually point to ##\omega_2##.

Essentially, what I am concerned with is the smallest value ##m \in Ord## such that ##p_m## is not eventually pointable (by any program). To show that such an ##m## exists we would first need to show that ##p_{\omega_1}## exists. Then, because the indexes are merely natural numbers (and hence countable), there must exist an ##m<\omega_1## such that ##p_m## is not eventually pointable.

The main reason why ##m## is of interest is that for each ##i<m## there exists a program that eventually points to a value ##<p_i+\omega_2##. For example, consider the subset of the real numbers:
##A_0=\{\mathrm{conf}(t)|t \geq \omega_2 \}##
Using this set ##A_0## we have a program that eventually points to a value ##<p_0+\omega_2## (cardinality argument). Something similar can be done (for each ##i<m##) by defining ##A_i=\{\mathrm{conf}(t)|t \geq p_i \}##. We necessarily have to use the fact that each ##p_i## (##i<m##) is eventually pointable.

Let's also consider a function ##f:Ord \rightarrow Ord##. The value ##f(i)## is defined as the "number of times" (ordinal count) a program moves from pointing towards ##p_i## to ##p_{i+1}##. Since ##f(x)## will always be countable, we might as well write ##f:Ord \rightarrow \omega_1##.

Now let's consider the various sub-statements that need to be examined. If they are correct, then their difficulty should be manageable [even the harder ones; a few are obviously correct and easy].
(0) ##p_i## exists for all ##i \in Ord## (more specifically ##p_{\omega_1}## exists)
(1) For each ##i \leq m## and all ##e \in \mathbb{N}## for which:
##Y_e \geq p_i##
all the programs satisfying the above inequality have their pointing variable equal to ##p_i## at exactly the same time (simultaneously).

(2) For each ##i \leq m##, ##p_i## is a limit
(3) For each ##i \leq m##, ##f(i)## is a limit
(4) For each ##i \leq m##, ##p_i## is a multiple of ##\omega_2## (that is, ##p_i=\omega_2 \cdot \alpha## for some ordinal ##\alpha##)

(5) There exists a program with index ##a \in \mathbb{N}## such that:
##p_m < Y_a < p_{m+1}##

(6) Given that there exists a program with index ##a \in \mathbb{N}## as in previous sub-statement, then there exists a program with index ##b \in \mathbb{N}## such that:
##Y_b = p_{m}##

==================

To show sub-statement(5) we want to use the following:
----- sub-statement(3) above
----- the statement discussed in text of this post: "for each ##i<m## there exists a program that eventually points to a value ##<p_i+\omega_2##"
----- We also need to use the observation ##f(p_m)## must be countable and hence has countable co-finality (in contrast to ##p_m+\omega_2##)

To show sub-statement(6) we want to use the following:
----- sub-statement(5)
----- sub-statement(4)

Further, for sub-statement (6) I think we want to divide into two separate sub-cases and consider them independently:
(a) All programs that (temporarily) point to ##p_m## will eventually point to a value ##\geq p_m+\omega_2##.
(b) Not all programs that (temporarily) point to ##p_m## will eventually point to a value ##\geq p_m+\omega_2##. In other words, there exists a program that eventually points to a value ##> p_m## but ##<p_m+\omega_2##.
 
  • #7
I haven't been able to think of valid reasoning leading to proposition-(5) in the previous post (#6). My initial feeling was that the following could be used:
SSequence said:
----- We also need to use the observation ##f(p_m)## must be countable and hence has countable co-finality (in contrast to ##p_m+\omega_2##)
But I wasn't right about this, because I couldn't make use of it (for proposition-(5)). Because this is an important step, the complete chain of reasoning doesn't hold. The rest of the statements in the previous post should still be OK, I think, including the statement (based on a cardinality argument) that there exists a program which eventually points to a value ##<p_0+\omega_2##.

Another thing is that if we define an ordinal ##Q## which is the supremum of the times such that each configuration in the set: ##\{conf(x)|x \geq \omega_2 \}## occurs at least once (above ##\omega_2##), then we can say a number of things about ##Q##. For example, it seems that we should have ##Q<p_{\omega_2}##.

====================================

I have thought of a few more things, which I will try to describe. Consider:
##B_0=\{conf(x)|x \geq \omega_1 \}##
##p_0=\omega_1## with successive ##p_i##'s in general defined as in previous post.
##Q_0=\omega_1##
##Q_1=Q## defined as supremum of the times such that each configuration in the set ##B_0## has occurred at least once (after ##Q_0##)

Also, let's define ##c(\alpha)## as the supremum of halting positions [for a powerful enough model] given a single parameter ##\alpha##

Now we have:
------ Whenever ##p_\alpha## is pointable, then ##c(p_\alpha)=p_{\alpha+1}##. It seems that one could justify this using proposition-(1) and proposition-(3) in the previous post#6.

------ ##Q## is pointable. This is fairly obvious.
 
  • #8
One other thing that I was thinking about today was the "halting map". Given a model in which we have a single parameter ##x##, we can think of the halting map as a function ##f:\alpha \rightarrow \mathbb{N}##. Here ##\alpha<\omega_1## is the "total count" of all halting positions. For ##x<\alpha##, ##f(x)## gives the smallest index halting at the "##x##-th" position.

Now consider ##Q## as defined in post#7 just above. Suppose we have ##Q=p_b## for some ##b \in Ord##. Now, because ##Q## is pointable, we should have ##c(Q)=c(p_b)=p_{b+1}##. Now, by the definition of ##Q##, it should be true that there exists some ##N<Q## such that:
##conf(N)=conf(Q)##

Let's denote the halting maps corresponding to parameters ##Q## and ##N## as ##h_1:\alpha \rightarrow \mathbb{N}## and ##h_2:\beta \rightarrow \mathbb{N}## respectively. It seems to me that we should ask whether it is possible that ##h_2=h## [where ##\alpha \geq \beta## and ##h:\beta \rightarrow \mathbb{N}## is the restriction of the function ##h_1## to the domain ##\beta##]. The answer to the above seems to be negative, apparently? Note that, because ##conf(N)=conf(Q)## (with ##N<Q##), there exists a certain program which halts for parameter ##N## but not for ##Q##. That's because (after time ##Q##) I think ##conf(Q)## doesn't repeat itself again [because some programs eventually point towards ##Q##].

But the answer to the above question (in the previous paragraph) shouldn't be negative ... I think? Suppose the index ##e \in \mathbb{N}## eventually points to ##Q##. Then we can have indexes ##e_i## (where ##i \in \mathbb{N}##) such that the ##e_i##'s try to re-create the halting map above the position where ##e## is pointing (whenever ##e## stops changing its position ... temporarily or permanently). On the face of it, I am trying to understand: if the answer to the question in the previous paragraph is negative, then where does ##conf(p_{b+1})## occur below ##Q##?
 
  • #9
So I have spent a number of hours thinking about the previous post#8 and writing out the classification of the various cases along the way. The way post#8 is written, it might be difficult to parse [since I had just thought of the idea and hadn't tried to write any details]. The idea looks interesting, as I haven't been able to see an issue with it so far (after spending a few hours writing basic details w.r.t. the classification of cases that need to be considered) ... but idk. Anyway, I will try to describe the idea of post#8 in a somewhat neater way.

One thing that seems to me (unless I am missing something) is that whereas in post#8 I used ##p_0=\omega_1##, it seems possible to apply the same idea with:
##p_0=0##
##Q=\omega_1##

Since there is a lot of text in the thread, it seems quite reasonable to briefly list what is required to understand what is written below (in this post).
Background:
- The OP (post#1).
- Beginning part of post#6
- propositions-(0),(1),(3) from post#6

Description:
First, some notation that will be used in the description. Write ##Comp(=\alpha)## for a sufficiently general (infinite) computational model which can use the single parameter ##\alpha##. Denote by ##c(\alpha)## the supremum of the halting positions of such a model. Further, let ##count(\alpha)<\omega_1## denote the number of halting positions for the given computational model ##Comp(=\alpha)##.

Further, let's use a function ##h_\alpha : \mathbb{N} \rightarrow count(\alpha)## to describe the "halting map" for the programs of ##Comp(=\alpha)##. That is, ##h_\alpha(x)## is defined iff the program with index ##x## halts, in which case the program with index ##x## halts at the ##h_\alpha(x)##-th halting position [this is a count of the halting positions of the programs, which will be countable because ##Comp(=\alpha)## has a countable number of programs].
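To spell out my reading of this (the enumeration below is my own phrasing, not notation from the post): if the halting positions of ##Comp(=\alpha)##, listed in increasing order, are ##(q_\gamma)_{\gamma<count(\alpha)}##, then ##h_\alpha(x)=\gamma## iff the program with index ##x## halts exactly at the position ##q_\gamma##.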

Now here are a few statements (details omitted for brevity) that should hold [some of them require the propositions from post#6 which I mentioned under "background"]:
----- ##p_{\omega_1}=\omega_1##
----- With ##\eta_0## described in OP, it should be true that ##\eta_0=p_b## (for some ##b<\omega_1##)
----- ##conf(\omega_1)=conf(\eta_0)##
----- For all ##i \in Ord##, ##c(p_i) \geq p_{i+1}##
----- For all ##i,\alpha \in Ord##, if ##p_i < \alpha < p_{i+1}## then ##c(\alpha) \geq p_{i+1}##

----- For all ##i \in Ord##, ##p_i##'s can be assumed to be sufficiently closed. For example, for any arbitrary ##\alpha## if we have ##p_i \leq \alpha < p_{i+1}## then ##\alpha^+< p_{i+1}##

Main Point:
Now the main point I am trying to consider is: where does ##conf(c(\omega_1))## occur below ##\omega_1##? For this I tried to divide into a number of sub-cases. First, let ##e \in \mathbb{N}## be the index of some program which eventually stabilizes at ##\omega_1##. That is, ##Y_e=\omega_1##.

(A) Can ##conf(c(\omega_1))## occur at some point ##\alpha<\omega_1## where we have ##y_e(\alpha) = p_i## (for some ##i<\omega_1##)?
(B) Can ##conf(c(\omega_1))## occur at some point ##\alpha<\omega_1## where we have ##y_e(\alpha) \neq p_i## (for all ##i<\omega_1##)?

As mentioned in beginning of post#6, ##y_e(t)## denotes the value that program with index ##e \in \mathbb{N}## is pointing to at time ##t \in Ord##.

For both (A) and (B), we divide into further sub-cases. First considering sub-cases of possibility (A).
(a) ##count(p_i)<count(\omega_1)##
(b) ##count(p_i)=count(\omega_1)##
(i) The halting maps (for ##p_i## and ##\omega_1##) are equal
(ii) The halting maps (for ##p_i## and ##\omega_1##) are not equal

(c) ##count(p_i)>count(\omega_1)##
(i) The halting maps (for ##p_i## and ##\omega_1##) agree exactly till ##count(\omega_1)##
(ii) The halting maps (for ##p_i## and ##\omega_1##) do not agree exactly till ##count(\omega_1)##

Now the idea is to have a series of indexes ##e_i## (where ##i \in \mathbb{N}##) so that whenever the program with index ##e## stops changing its pointing variable (temporarily or permanently) these programs ##e_i## build a version of the halting map right after it. For example, suppose program ##e## stopped changing its value at some time ##t## for a long enough duration. Then, if the stopping duration is long enough, we will get ##y_{e_i}## (the position where ##e_i## is pointing) equal to ##y_e(t)+h(i)## (for an appropriate ##i##). Assuming ##T>t## is sufficiently smaller than the time when ##y_e## starts changing again, and also assuming that program ##i## from the computational model ##Comp(=t)## halts, we will have:
##y_e(T)=y_e(t)##
##y_{e_i}(T)=y_e(t)+h(i)##

Now with this, classifying the possibilities:
(A)(a) No need to worry (that is, ##conf(\alpha) \neq conf(c(\omega_1))## due to build-up of halting maps described in previous paragraph)
(A)(b)(i) Can't occur
(A)(b)(ii) No need to worry (same as (A)(a))

(A)(c)(i) Can be ruled out via a halting argument. To be more precise, if this possibility could occur then a program from the model ##Comp(=\omega_1)## would halt at ##c(\omega_1)##.

(A)(c)(ii) No need to worry (same as (A)(a))

============================

Essentially, a similar classification as above needs to be applied to the sub-possibilities of (B).
 
  • #10
Examining an important point more closely:

From post#6:
SSequence said:
(1) For each ##i \leq m##, all ##e \in \mathbb{N}## for which: ##Y_e \geq p_i##
All the programs that satisfy the above inequality have their pointing variable equal to ##p_i## at exactly the same time (simultaneously).

Rewriting it as:
SSequence said:
(1) For each ##i \in Ord##, all ##e \in \mathbb{N}## for which: ##Y_e \geq p_i##
All the programs that satisfy the above inequality have their pointing variable equal to ##p_i## at exactly the same time (simultaneously).

I have given it some more thought and I think I can't justify it, because I initially overlooked a slightly subtle possibility (which needs to be ruled out to make an unqualified statement like the one in quotes). It is a bit unfortunate, since I have tried to ensure that statements I couldn't justify don't enter the thread (if they are wrong they create confusion).

===================

Now obviously this statement should be true for many ordinals (those that are closed enough). But my understanding is that what matters more is whether there are consecutive regions of ##p_i##'s where the quoted statement is true or not (and what those regions might be). My initial understanding is that there should be large (consecutive) regions with ##i \geq \omega_1## where the quoted statement should be true.

====================

But now an important question is how this affects what is written in post#9 (since there I was assuming the above statement to be true for all ##i \in Ord##). After some thought, it seems to me that there are two possibilities to look at here:
(1)
Don't change much from post#9; just change the possibilities (A) and (B) in post#9 from:
(A) ##y_e(\alpha)=p_i## for some ordinal ##i##
(B) ##y_e(\alpha) \neq p_i## for some ordinal ##i##

to:

(A) ##y_e(\alpha)=p_{\omega \cdot i}## for some ordinal ##i##
(B) ##y_e(\alpha) \neq p_{\omega \cdot i}## for some ordinal ##i##

I will think about this possibility first, as it is much easier to try, and (if it works) only a few things, if any, from the main argument of post#8 would need to be changed. The reason this is plausible is that, because we are interested in ##conf(c(\omega_1))##, the smallest ##\alpha## for which ##conf(\alpha)=conf(c(\omega_1))## holds should be greater than ##\eta_0##. And because ##\eta_0## is highly closed (e.g. see the OP), we are probably still going to have:
##conf(\omega_1)=conf(\eta_0)=conf(p_b)=conf(p_{b+\omega \cdot i})## (for some ordinal ##b## and for all ##i<\omega_1##)

(2)
As I mentioned, trying to think of consecutive regions of ##p_i##'s where the quoted statement (at the beginning of the post) is true. It is likely that a number of details from post#9 would need to be changed. Also, it might be beneficial to take ##p_0=\omega_1## instead of ##p_0=0## (as in post#9) ... but that's slightly tentative.

The basic idea would still be the same, which is:
(i) considering the occurrence of ##conf(c(Q))## below ##Q##
(ii) Looking at the halting map corresponding to ##Comp(=Q)## (computational model with a single parameter ##Q##).

Because possibility-(2) is definitely much more complicated than possibility-(1), I will first try to look at possibility-(1).
 
  • #11
There might be a few inaccurate statements from post#6 onwards.

So what I mentioned in post#9 has some issues. The main problem seems to be possibility (A)(c)(i). It can't be ruled out, because of the repetition of ##conf(Q)## indistinguishably many times (below ##Q##).

===============

Upon thinking about halting maps, I was wondering whether it might be useful to think of other functions with ordinals as their domains. Defining something like:
##Q_0=\omega_4##
##A_i=\{conf(x)|x\geq Q_i\}## (for all ordinals ##i##)

##Q_{i+1}## as the supremum of times when all elements in ##A_i## have occurred at least once
##Q_j## as the supremum of ##Q_i##'s (with ##i<j##)

For any arbitrary ##i<j## we will have ##A_i \supseteq A_j##.

Let ##count(i)<\omega_2## [the inequality follows because the cardinality of the real numbers is ##\omega_1##] denote the count of "new" configurations starting from the ordinal ##Q_i##. Now with each value ##i## we can associate a function ##f_i:count(i) \rightarrow A_i##, with ##f_i(x)## having the obvious meaning that this is the configuration that was encountered (as a new one beyond ##Q_i##) for exactly the ##x##-th time.
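Spelling out my reading of this (the recursion below is my own phrasing): ##f_i## is defined by transfinite recursion on ##x<count(i)## as
##f_i(x)=conf(t_x), \quad \text{where } t_x \text{ is the least } t \geq Q_i \text{ such that } conf(t) \notin \{f_i(y) \mid y<x\}##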

Now, taking the cardinality of all functions ##f: \alpha \rightarrow \mathbb{R}## (with ##\alpha<\omega_2##) to be ##\omega_X##, we want to start from ##Q_0=\omega_N## (where ##N \geq X##). Now we want a program that can reach a point ##Q_{i+1}## (##i<\omega_{X+1}##) such that we have ##f_i=f_{i+1}##. If we can't reach such a point, then we want to reach a point ##Q_{i+1}## where the function ##f_{i+1}## is (strictly) a restriction of the function ##f_i##.

===============

We want to make a program which stays constant for a very long time and then moves. Now, somewhat roughly speaking, I want to consider a scenario which is something like:
##conf(Q_i)=conf(Q_{i+1})=conf(Q_{i+2})##

with ##f_{i+1}## being a (strict) restriction of the function ##f_i## [also note that ##range(f_{i+1})=A_{i+1} \subseteq range(f_i)=A_i##]. One other thing is that we might have ##\alpha < Q_{i+2}## such that ##conf(\alpha)=conf(Q_{i+1})## [but, based on "informal" intuition, that shouldn't be an issue perhaps since it still might be too large of a jump].

So we have something like a program which, at some given step ##n \in Ord##, tries to "estimate/guess" the ##Q_i##'s "locally" (using ##n##). Some of its estimates will generally be wrong, but if ##Q_x < n <Q_{x+1}## then all the estimates up to ##Q_j## (with ##j \leq x##) will be accurate. At step ##n##, denote the set of ordinals serving as estimates by ##S(n)##.

Now, supposing our program has index ##e##, we want our program to change its pointing value (after some step ##n##) at some step ##N>n## under three conditions:
----- The set ##S(N)## no longer contains the value being pointed to by our program at step ##n##
----- ##conf(N)## is the same as the configuration corresponding to the value being pointed to by our program
----- The "estimated" function [calculated using the estimations of the ##Q_i##'s] ##f## [assuming ##n=Q_{i+1}##, we will have this function as ##f=f_i##] disagrees with the newly generated function ##g## (which is built up from the configurations encountered after the currently pointed value).

In all the above three cases we want our program to increase its pointing variable to ##N##.

===========================================================================

At any rate, if (1) and (2) below are both true then I won't be bumping this thread any further [i.e. a bump means one of the two below is false]:
(1) It is reasonably clear to me that I can't make the idea described in this post work (so if I don't bump, this can be taken to be understood).
(2) I don't have any further ideas (or interesting points) to add
 
  • #12
So I think that the details in the second half of the previous post#11 definitely have some very clear issues. At any rate, the main idea of using the cardinality of the ##f_i##'s [by employing the ordinal ##\omega_X## as in the first half of post#11] seems to hold a fair amount of promise (but nevertheless, it is tentative). The main thing is that [to use the cardinality of the ##f_i##'s in some way] we have to use a condition which (at any arbitrary point ##j##) considers all the previous ##f_i##'s (with ##i<j##). This is quite different from the second half of post#11.

SSequence said:
Now supposing our program has index ##e## we want our program to change its pointing value (after some step ##n##) at some step ##N>n## under three conditions:
(a) ----- The set ##S(N)## no longer contains the value being pointed to by our program at step ##n##
(b) ----- ##conf(N)## is the same as the configuration corresponding to the value being pointed to by our program
(c) ----- The "estimated" function [calculated using the estimations of the ##Q_i##'s] ##f## [assuming ##n=Q_{i+1}##, we will have this function as ##f=f_i##] disagrees with the newly generated function ##g## (which is built up from the configurations encountered after the currently pointed value).
The point is that if we consider ##\omega_X<\gamma=Q_k<\omega_{X+1}##, defined as the sup of all ##Q_i##'s such that all the ##f_i##'s have occurred below ##\gamma## at least once, then it isn't difficult to devise a condition [via comparison with "correct estimates" of all previous ##f_i##'s at ##\gamma##] under which, if a program's pointing variable reaches ##\gamma##, it stays constant for a long time [longer than should be allowed] and then moves. This essentially corresponds to condition-(c) in the quote from the previous post. But condition-(c) doesn't seem to work. We want to replace it with the kind of condition just mentioned.

The main problem is that (as usual) there are two conflicting demands that we have to balance out. We want to place auxiliary conditions such that:
---- we reach ##\gamma## in the first place
---- if we reach ##\gamma## then our program will still be holding its pointing variable as constant for long enough

So obviously, we would necessarily want to have a suitable variant of condition-(c) mentioned above, since that's the main idea in the first place. For the auxiliary conditions, it seems that we would quite likely want condition-(a), to move the pointing variable through non-##Q_i## points [but then we would need to consider what would happen (generally) if the pointing variable is at a ##Q_i## point]. It seems that we might also need condition-(b), and perhaps another condition or two depending on the details.

============================================================================

Since a lot of this depends on details of how it works out (or not), which I have yet to consider ... it wouldn't be suitable to write a long post here.

What I mentioned seemed quite important to clear up, so I mentioned it briefly. But whatever I wrote towards the end of the last post still applies.
 
  • #13
Now, having considered ideas similar to the previous two posts for some time, I think I can write some comments related to them. Then I will try to describe what I think might address the specific issues. However, I will not try to do that here (the addressing of issues); instead I will just link to documents that describe (initially) the basic sketch.

There are a number of points to consider with regard to the previous two posts.
[[1]] Firstly, it seems to me that there is some choice in what we want to define as a "region" ##R_i## (##i \in Ord##). For example, we might define:
##R_i=\{x\in Ord \,\,|\,\, Q_i \leq x<Q_{i+1}\}##
But it seems it might be easier to work with ##p_i##'s. Hence we might perhaps take:
##R_i=\{x\in Ord \,\,|\,\, p_i \leq x<p_{i+1}\}##

[[2]] At the beginning of post#10, I mentioned that I had some trouble justifying statement (1) from post#6. Now, having thought about it some more, it seems it can be justified in generality. I have thought it over twice or so ... but I will try to think it through a few more times to be more certain.

[[3]] So let's first discuss the last post (#12). Denote the functions ##f_i## of the last post as ##F_i##.

If we are at the beginning of a region ##R_j## (at a ##p_j## or ##Q_j##, that is), then is there an issue with comparing ##F_j## with all the previous ##F_i##'s? I think there is. We can't rule out that (at the beginning of region ##R_j##) the function ##F_j## is a (strict) restriction of some ##F_a## with ##a<j##. So the program will not move its pointing variable any further than the beginning of the region ##R_j##.

The main problem here is that at beginning of region ##R_j## comparison with all previous ##F_i##'s (##i<j##) just leaves too many possibilities.

[[4]] Now let's discuss the second-to-last post (#11). There is a certain conclusion that one can draw from the construction in it. That conclusion is that:
##F_i \neq F_{i+1}## (for all ##i \in Ord##)

So the above statement can be seen as a lemma of sorts. Obviously it isn't problematic at all, since this kind of pattern can be avoided (very easily).

=========================================================

Now one can ask some questions regarding the above points. Is it true that we can create more complex impossibility patterns than the one mentioned in point [[4]]? As far as I understand, the answer is yes. Another question is whether we can combine the ideas in the last two posts to create a better construction. I think the answer is again yes.

One question is whether one can create an "impossibility pattern" that is ruled out (in the first place) via a cardinality argument. My first (and second) impression is that we should be able to. Very (very) roughly speaking, we want a counter that has to increase (via cardinality) from ##\omega_X## to ##\omega_{X+1}##. But that increase [from ##\omega_X## to ##\omega_{X+1}##] is itself supposed to be the impossible pattern.

So obviously what I wrote should be taken as tentative, as it should be checked well beyond the first few impressions. In particular, there are two different aspects ----- (1) the logic of the argument (including the logic of several smaller lemmas) ----- (2) the correctness/incorrectness of some base assumptions (this can be checked by more knowledgeable people). For example, in the last topic I made, the argument I described/linked in detail had no issue with (1) but there was an issue with (2) [I wasn't aware that ##\alpha##-computable was defined to be "with parameters"].

=========================================================

Anyway, I will try to link documents that describe the idea in the above part of the post ... uploading a first draft (very basic) in the next two or three days and then editing/updating to more detailed drafts over the next two or three weeks or so [this also removes the need to bump the thread just to update/improve some part of the argument]. In case the links don't work (or contain blank documents "after" the next few days), it means that I think there is some issue in the argument as a whole [which doesn't necessarily mean that all the lemmas in it are incorrect etc.].

EDIT:
LINK1
LINK2
 