Counterexample Required (Standard Notations)

In summary, the criterion states that a computable ordering for an element p below ωCK represents a standard notation only if there is a freezing S-while program that runs for exactly time p and whose representation functions are recursive.
  • #1
SSequence
I came up with the following qualification condition (for which I am asking for a counterexample).

Some Background:
A notation can be thought of as a mapping from the ordinals below some p∈ωCK to a subset of the natural numbers.
Generally, there seem to be a few possible variations:
-- one can decide whether to use all elements of N or a proper subset of N in the mapping
-- one can decide whether two different numbers (say 7 and 11) can be assigned to the same element (say q∈ωCK) or whether two different numbers must always correspond to two different elements.

In what follows I will assume that all elements of N are used in the mapping and that two different numbers always correspond to two different elements (so each element is assigned a unique number).
I will denote the appropriate mapping function as address:p→N. So assuming p ≥ ω, this function is a 1-1 correspondence.

It is well-known that many pathological examples of (recursive) ordering functions exist. One is explained in some detail at the following link:
https://mathoverflow.net/questions/...th-the-same-order-type-recursively-isomorphic

Required Notions:
(A) Programs
Now we consider the notions that are required to formulate the condition. We need the definition of an S-while program.
An S-while program has the following commands (this is the entirety of the command set):
i) while(v!=w)
ii) End
iii) v=0
iv) v=v+1
v) v=u[t]

The "End" command is used in place of closing brackets. Whenever a while condition fails, the next command to be executed is the one right after the End command (corresponding to the while condition).

The run time of the program must be less than ωCK. If the program runs for ωCK time or more it is considered to be non-terminating (looping).

All variables start from value 0. The commands in (iii) and (iv) have the usual meaning. Before discussing the meaning of command in (v) we need to describe the evaluation of variables on limits.

Consider some limit element p. To evaluate the value of a variable (say v) we divide into two sub-cases:
(a) There is some element q < p such that q was the last position at which the value of variable v was 0.
Let's denote the value of variable v at some arbitrary time t as Value(t). Then define a set S as:
S={ Value(t) | q < t < p }
We define Value(p) as equal to supremum (least upper bound) of the set S.

(b) There is no "last" position (before p) at which the value of variable v is 0. In this case set Value(p) equal to 0.

Now let's discuss the command (v). The meaning of command can be thought of as follows (it has 3 variables v, u and t):
Set the value of variable v equal to 0 immediately on the next step. Then increase it in increments of 1 until it reaches the value x, where:
x is the value of variable u at time t (in the run of the program).

So this command takes more than unit time to execute. More specifically, the time it takes depends on the value of variable u (at time t).

Finally, coming to line numbers: the line numbers start from 0 (the first line) and go up to n-1, where n is the length of the program. The line number n is reserved for the point at which the program halts.
For evaluating the line number on a limit, we take the line number on a limit as the smallest "reasonable" value.
This can be defined more formally. Suppose p is the limit value at which we want to evaluate the line number. Suppose State(t) denotes the line number at time t. Define a predicate P:ω→{0,1} as:
P(s) ≡ for all q < p, there exists a t with q < t < p such that [ State(t)=s ]
Next define a set S as:
S={ s∈ω | P(s) is true }
We define the State(p) to be the smallest value contained in set S.
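For finite running times the semantics above can be simulated directly (the limit rules of course cannot). Below is a minimal interpreter sketch in Python; the list-of-strings program encoding, and the exact timing conventions (e.g. that an End takes one step to jump back and the while re-test takes another), are my own assumptions and are not fixed by the description above:

```python
import re

def match_blocks(prog):
    """Pair each while(...) line with its matching End line."""
    stack, jump = [], {}
    for i, cmd in enumerate(prog):
        if cmd.startswith("while"):
            stack.append(i)
        elif cmd == "End":
            j = stack.pop()
            jump[i], jump[j] = j, i
    return jump

def run(prog, max_steps=10_000):
    """Simulate the finite-time fragment of an S-while program.
    Returns (env, hist): final variable values, and hist[t] = a
    snapshot of all variables at time t."""
    jump = match_blocks(prog)
    env, hist = {}, []
    get = lambda v: env.get(v, 0)      # all variables start at 0
    line, copying = 0, None            # copying = in-progress "v=u[t]" command
    while line < len(prog) and len(hist) < max_steps:
        hist.append(dict(env))
        if copying:                    # command (v): count v up towards its goal
            v, goal = copying
            env[v] = get(v) + 1
            if env[v] >= goal:
                copying, line = None, line + 1
            continue
        cmd = prog[line]
        if (m := re.fullmatch(r"while\((\w+)!=(\w+)\)", cmd)):
            v, w = m.groups()          # on failure, jump past the matching End
            line = line + 1 if get(v) != get(w) else jump[line] + 1
        elif cmd == "End":
            line = jump[line]          # go back and re-test the while condition
        elif (m := re.fullmatch(r"(\w+)=0", cmd)):
            env[m.group(1)] = 0
            line += 1
        elif (m := re.fullmatch(r"(\w+)=(\w+)\+1", cmd)):
            env[m.group(1)] = get(m.group(2)) + 1
            line += 1
        elif (m := re.fullmatch(r"(\w+)=(\w+)\[(\d+)\]", cmd)):
            v, u, t = m.group(1), m.group(2), int(m.group(3))
            goal = hist[t].get(u, 0) if t < len(hist) else 0
            env[v] = 0                 # first step: zero the target variable
            if goal == 0:
                line += 1
            else:
                copying = (v, goal)    # later steps count v up to goal
        else:
            raise ValueError(f"unknown command: {cmd}")
    return env, hist

env, hist = run(["v=v+1", "v=v+1", "w=v[1]"])
# w ends with the value v had at time 1 (namely 1); the run takes 4 steps
```

Note in particular that the command w=v[1] takes two steps here (zeroing plus one increment), in line with the point above that command (v) takes more than unit time.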

(B) Representation
Our specific concern will be with those programs that halt. So note that in that case the function describing value of some variable at some time can be described as Value:p→p. Similarly the function describing the line number can be described as State:p→ω.

Now suppose for some element p we are given the address function address:p→N. For some function F:p→p we can define (explicit) representation function of F as f:N→N and defined by:
f( address(x) ) = address( F(x) )

Note that because F(x) < p (for all x in the domain) and because F is total (in our case) we don't need to worry about any extra cases here. So to keep things as brief as possible we move on.
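As a toy illustration of (B) (my own example, not taken from the discussion above): fix the concrete ordering of type ω·2 on ℕ that lists the even numbers first and then the odd numbers, and compute the representation of F(x) = x+1 directly from the defining equation f( address(x) ) = address( F(x) ). Here the pair (k, n) stands for the ordinal ω·k + n:

```python
# ordinals below ω·2 are written as pairs (k, n) meaning ω·k + n;
# the ordering on N: evens first (coding ω·0+n), then odds (coding ω·1+n)
def address(k, n):
    return 2 * n + k

def position(m):                # inverse of address
    return (m % 2, m // 2)

def successor(k, n):            # the function F(x) = x + 1
    return (k, n + 1)

def representation(F, m):
    """The defining equation f( address(x) ) = address( F(x) ), solved for f(m)."""
    return address(*F(*position(m)))

# in this ordering the representation of successor is simply m ↦ m + 2:
[representation(successor, m) for m in range(6)]   # → [2, 3, 4, 5, 6, 7]
```

So in this particular ordering the representation of the successor function works out to the recursive map m ↦ m + 2.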

(C) Freezing
Now, finally, before stating the criterion we add one last condition to our programs: our programs are "freezing", in the sense that the "complete snapshot" of the program (that is, both the values of all variables and the line number) can never repeat itself.

Suppose a repetition of the "complete snapshot" occurs at some point q. Then for all times t > q we assume that the program remains at the same line number (with the same variable values). Hence it is considered to be a non-terminating program.

Timing Criterion:
A computable less-than relation (ordering function) for a given element p∈ωCK (say non-limit) represents a standard notation only if:
There exists a freezing S-while program that runs exactly for time p and the representation functions for its line number and all its variables are recursive.

Just in case there is some trivial issue (with this specific statement of the qualification condition), note that the criterion can also be stated in terms of a finite number of variables whose values must increase or decrease in a specific manner (only incrementing by 1 and zeroing). The condition corresponding to freezing would be that at no two different times (except initially, at position 0 or 1) can the values of all the variables be equal. The effects of commands in this case can be thought of as instantaneous (commands can't be placed at position 0 or 1). If at a limit (say p) the value of a variable also evaluates to p, it is forcibly set to 0 (at p).

=====

What I am asking for is a counterexample to the condition above. I have tried hard to find one but haven't been able to come up with it. Perhaps I am not thinking in the direction required for the counterexample.
In some sense I suppose I might be relieved to see a counterexample (in another sense, perhaps a little disappointed). For now, I just want to stress this possibility (and not the possibility of a lack of counterexamples).

A counterexample can be given in two ways here:
(1) Since the condition is supposed to be general, it would be sufficient to demonstrate an element p∈ωCK where the required condition simply can't be satisfied.
I am reasonably convinced this isn't true (for certain reasons) and have a sketch in mind. But since I haven't formally worked it out (both due to not having filled in all the details and due to lack of requisite knowledge) I suppose I won't include it.

(2) Demonstrate a pathology directly (showing that it demonstrably satisfies the criterion). This would be enough.
As is usually the case, this will almost certainly also demonstrate that two different notations for an element p exist (both satisfying the given criterion) for which the mapping between them fails to be recursive. One notation will be standard and the other one will be a pathology.
 
  • #2
SSequence said:
A computable less-than relation (ordering function) for a given element p∈ωCK (say non-limit) represents a standard notation only if:
There exists a freezing S-while program that runs exactly for time p and the representation functions for its line number and all its variables are recursive.
Just a small point to mention regarding the statement of the criterion. We can simply say "runs for time greater than or equal to p". That would also mean that p could be a limit. We would then be talking about the corresponding representation functions described on p (not on the total run time, which might be greater than p). The program still has to halt though.

EDIT: Or maybe we'll forget about this. The original statement is also fine (and requires a minimal number of notions/assumptions). The modification mentioned above can then be considered as adding more cases/assumptions etc. I am not removing it as that may create unnecessary confusion.

The main point is that, in any case, IF the notion is defined for some element p THEN it is also defined for all elements less than p anyway.
 
  • #3
I posted this question on overflow (I hope it won't be closed just based on lack of formatting).

Just in case anyone is interested in following the question.
https://mathoverflow.net/questions/275786/counterexample-required-standard-notations

Also, I have a somewhat different question regarding the use of TeX etc. With certain formatting I can write in a TeX-like form on this forum. So if I learn the proper formatting (for writing in TeX-like form), can I simply use it on overflow too? (Or might there be issues with standardization?)
 
  • #4
It does not seem that the question has been well-received on MO. In the future, after enough time has passed, I suppose I might post this question on SE (in case the question I posted on MO gets closed).

Obviously there are a lot of highly knowledgeable people there, even specifically in the topic about which I posted. So I can't really complain.

But I think I should mention a few reasons now for posting this question. It is possible that generating an example that refutes the qualification condition mentioned in the OP is relatively easy. However, it is also possible that it is quite difficult. This, obviously, is what I wanted to find out by posting this question.

=========

It is a known problem whether, for elements p∈ωCK, there should be notations that can be considered intrinsic or natural. Expert opinion on this matter seems to be divided, perhaps partly because the question is strictly speaking not a mathematical one (it requires at least some pre-mathematical notion).
There do seem to be a few research programs in this regard, but they are well beyond my scope and it is better for me not to comment on them at all.

Regardless, one of the suggestions I came up with while working on a somewhat related notion was the existence of a recursive map between two notations. It seems this was also suggested by the logician Georg Kreisel a long time ago (though I will admit I don't know any details at all ... the context there might have been somewhat different).
At any rate, for the notion that I had formed it seemed that both of the following conditions must hold:
i) Existence of an intrinsic notation for all p∈ωCK.
ii) A recursive mapping being a necessary property of the intrinsic notations.

=========

So essentially what I have stated as a criterion above can be thought of as a "hypothesis" in the strict sense. I feel that the word "criterion" is suitable here though (because it is a certain qualification meant to rule out pathologies, and it has an informal part along with a formal part). The reason I wanted to know whether this hypothesis has a relatively simple counterexample is that, assuming it to be correct, one can justify further notions.

Here it is fairly important to describe two simple properties of a lot of criteria that fail. Usually it is sufficient to deduce that a criterion "must" fail if it satisfies one of the following two properties. Obviously it is also possible for a criterion to fail without satisfying either property. I also think this is probably common knowledge among logicians (who have thought a bit about this issue). Of course, if the reader is interested, I can also describe the source for a few specific comments in this regard.
(1) Diagonalisation
The use of the word "diagonalisation" here is in a somewhat informal/rough sense. Basically, the idea is that we can usually describe functions in an ad-hoc manner that guarantee that notations up to a certain level remain very well-behaved. An example describes the idea better.
A very reasonable property for ω^2 is to require that any "reasonable" notation for ω^2 must have the representations of the following two functions recursive:
F(x) = x+1
G(x) = x+ω
In fact, for any two notations for ω^2 satisfying this, we can easily prove the existence of a recursive map between them.
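To see how concrete that recursive map is: every ordinal below ω^2 has the form ω·a+b, i.e. it is reached from 0 by a applications of G followed by b applications of F, so one can search out the pair (a, b) in the first notation and replay it in the second. A sketch in Python, using two toy notations of my own (based on Cantor pairing) and assuming also that the code of 0 is known in each notation:

```python
from itertools import count

def cpair(a, b):                       # Cantor pairing, a bijection N^2 -> N
    return (a + b) * (a + b + 1) // 2 + b

def cunpair(m):                        # its inverse
    s = 0
    while (s + 1) * (s + 2) // 2 <= m:
        s += 1
    b = m - s * (s + 1) // 2
    return s - b, b

# notation 1 codes ω·a+b as cpair(a, b); notation 2 codes it as cpair(b, a).
# f = representation of F(x)=x+1, g = representation of G(x)=x+ω
def f1(m): a, b = cunpair(m); return cpair(a, b + 1)
def g1(m): a, b = cunpair(m); return cpair(a + 1, 0)   # (ω·a+b)+ω = ω·(a+1)
def f2(m): b, a = cunpair(m); return cpair(b + 1, a)
def g2(m): b, a = cunpair(m); return cpair(0, a + 1)

def iterate(h, k, x):
    for _ in range(k):
        x = h(x)
    return x

def code_of(f, g, zero, a, b):         # code of ω·a+b in the given notation
    return iterate(f, b, iterate(g, a, zero))

def translate(m, f1, g1, z1, f2, g2, z2):
    """Map a code m in notation 1 to the code of the same ordinal in
    notation 2, using only the representation functions of each."""
    for n in count():                  # dovetail over all pairs (a, b)
        for a in range(n + 1):
            b = n - a
            if code_of(f1, g1, z1, a, b) == m:
                return code_of(f2, g2, z2, a, b)

# e.g. ω·2+3 is coded 18 in notation 1 and 17 in notation 2:
translate(cpair(2, 3), f1, g1, 0, f2, g2, 0)   # → 17
```

The same search works for any two notations for ω^2 with recursive f and g; only the searched-for equality depends on the particular orderings.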

But the issue with such a definition is that it is very specific. For ω^3 one can easily generate a pathology that doesn't satisfy the condition above (the condition of a recursive map will also fail). Within that notation, in fact, the representation of the following function will fail to be recursive:
H(x) = x+ω^2

One might think that this can be "fixed" by using more sophisticated functions. In fact, it seems that if we take a sophisticated enough function we can also define intrinsic notations for elements up to ε_0 (I have forgotten the specific function I had in mind for this). Indeed, assuming the representation of that function to be recursive, one could show a recursive mapping between all notations for ε_0 satisfying that property. And then the notation of Cantor can also be shown to be intrinsic (after suitable/reasonable conversion of terms into numbers).

Once again, it seems (as I recall) the problem is that for ε_ω one could make that condition fail (so a recursive map would fail to exist between two notations for ε_ω that satisfy the condition).
So, in general, using specific functions and requiring their representations to be recursive only works up to a specific point (and can never be used as a general criterion).

(2) Uniform Mechanisation
Defining specific functions and requiring their representations to be recursive isn't the only way to describe a criterion. But, in general, whenever there is a "uniform" way to convert an arbitrary computable less-than function (that may not satisfy the criterion) for some element p∈ωCK into one that satisfies the criterion (for some q ≥ p), it is almost a sure sign that the criterion has to fail.

Property (1) mentioned above can in some sense be seen as a special case of this. For example, given an arbitrary computable less-than function for p∈ωCK, we can write a simple program which returns the less-than function of some element q ≥ p in which the representation of the successor function is recursive.
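Such a uniform conversion can be sketched very concretely (the encoding choices below are my own): substitute a copy of ω for every point of the given ordering. The result has order type ω·p ≥ p, and the representation of the successor function is recursive no matter how pathological the input ordering is, since the immediate successor of the k-th point of a copy is always the (k+1)-th point of the same copy:

```python
def cpair(n, k):                        # Cantor pairing, a bijection N^2 -> N
    return (n + k) * (n + k + 1) // 2 + k

def cunpair(m):                         # its inverse
    s = 0
    while (s + 1) * (s + 2) // 2 <= m:
        s += 1
    k = m - s * (s + 1) // 2
    return s - k, k

def uniformize(less_than):
    """Given a computable strict ordering on N of type p, return an
    ordering of type ω·p: element cpair(n, k) is the k-th point of the
    ω-copy substituted for n."""
    def new_less_than(x, y):
        n, k = cunpair(x)
        m, j = cunpair(y)
        return less_than(n, m) or (n == m and k < j)
    return new_less_than

def succ_representation(x):             # recursive regardless of less_than
    n, k = cunpair(x)
    return cpair(n, k + 1)

# demo: uniformizing the standard order on N (type ω) yields type ω·ω
lt = uniformize(lambda a, b: a < b)
lt(cpair(0, 5), cpair(1, 0))            # → True: the copy of 0 precedes the copy of 1
```

Note that `uniformize` never inspects how `less_than` was computed, which is exactly the "uniform mechanisation" being described.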

==========

To summarise, I think one might also say that this whole discussion is too opinionated or subjective. Perhaps searching for a definition of natural well-ordering or natural notations using the idea of a recursive mapping is simply not meant to succeed.

I have thought of another criterion which is structurally entirely different from the one in the OP. One difference is that it seems (if the criterion works as expected) one can prove the required recursive-mapping property for specific elements. That criterion uses only the most basic kinds of functions. So if one can prove it for a reasonably large element (perhaps ω^ω or a little higher), that makes it very likely to be correct in general.
However, admittedly that is a little premature (even though the first signs seem quite convincing ... but once again it is better to be cautious) as I haven't worked out proofs for specific elements either yet.

But indeed, if someone describes a direct counterexample to the criterion in the OP, and if for the one mentioned in the previous paragraph I find a troubling case at a small point (the first troubling point, if it exists, should definitely occur fairly low) ... I guess then I will have to concede that this particular way of thinking about standard/canonical notations is not meant to work in general (or perhaps at least that I didn't think of the right definition).
 
  • #5
SSequence said:
But I think I should mention a few reasons now for posting this question.
Speaking only for myself, when a complicated question is asked, I ask why I should really care about the answer. Apparently, the technical concept of notation ( https://en.wikipedia.org/wiki/Ordinal_notation ) can be related to Gödel numbering. The concepts of Gödel numbering and Gödel's incompleteness theorem excite people's interest. If you can explain a direct relation between your questions and "popular" topics such as incompleteness or computability, experts in those fields (which would not include me) might be stimulated to respond.
 
  • #6
Well, this might be a good idea for a general or even a semi-technical discussion. But when asking a specific question of experts, I am not sure it is the best idea (especially on a site which is quite strict/restricted in its template).

Indeed, the kind of style you are suggesting is probably better suited to this site.
 
  • #7
Stephen Tashi said:
Speaking only for myself, when a complicated question is asked, I ask why I should really care about the answer. Apparently, the technical concept of notation ( https://en.wikipedia.org/wiki/Ordinal_notation ) can be related to Gödel numbering. The concepts of Gödel numbering and Gödel's incompleteness theorem excite people's interest. If you can explain a direct relation between your questions and "popular" topics such as incompleteness or computability, experts in those fields (which would not include me) might be stimulated to respond.
Well, I thought I might respond, since at least one of the motivating reasons does relate to it. One interesting point is that Gödel himself thought that if one could solve this problem then one could possibly dissolve unsolvability entirely (over some domains at least). Though I have the feeling that very few people would share this view.

These days I think the optimism about this would be very low (not to say "very very low"). Perhaps optimism about it has decreased over time (maybe rightly so) and wasn't always that low. I have no idea.

What is the biggest obstacle for any such construction? There are two parts to it:
(1) You have to give a reasonable way to assign sequences (in a deterministic way) to the elements all the way up to ωCK. Here "reasonable" could perhaps be taken to mean that at no smaller point (< ωCK) should the process become unsolvable. I think that would basically count as circular and defeat the purpose.
(2) A number of auxiliary properties must be satisfied by the construction. Specifically there are three properties I have in mind. But I won't discuss them here (the reason becomes clear from the following discussion). If "all three" conditions are true for some specification, a diagonalisation follows.

Forgetting about (2) though, the process is really kind of stuck on step (1). I don't think anyone really knows how one can specify the description/process for step (1).

Over the past few years I have thought of many specifications (indirect ones; directly specifying a notation for a specific element and describing sequences doesn't count at all, because by definition it won't work) ... mostly out of fun/experimentation (and really just for step (1); I didn't think about step (2) until much later). And usually it becomes clear after a few days or weeks, sometimes with a little experimentation, that the given specification is just not going to work (meaning that it would surely "stumble" at some point). And honestly, I stopped thinking about it completely a couple of years ago.

Somewhat recently, though, I have thought about making a modification (basically adding a condition) to the second criterion I mentioned in the previous post. For the resulting process it is hard to say beforehand why it would fail step (1) (meaning why it should stumble at some point) and, in case it doesn't, why it should fail at least some parts of step (2). It seems difficult enough (for me at least) to assert one way or the other from the outset (probably some further significant experimentation would tilt my opinion in one of the directions).
But nevertheless, I do understand (specifically after some previous experience) that it really is against the odds (even that is an understatement), so I tend to keep my enthusiasm restrained in any case.

This is a bit compounded by the point that it seems fairly difficult to analyze the second criterion (in general sense). Well, who knows, maybe that indicates a fault anyway.

I should probably focus more on increasing my basic knowledge anyway (which is really lagging far behind in any case).

Edit:
Sorry if this thread is beginning to look like a personal blog. I won't be posting any further (unless someone else is interested in discussion, of course) unless either someone derives a significant (probably partial) positive result for the question in the OP or finds a direct counterexample (which would settle the specific question in the OP). Well, I hope at least that people found this post interesting.
 
  • #8
Well I think I have figured out the issue here (for the question in OP). Not going to discuss it in detail (as perhaps people aren't going to find it interesting enough).

As I mentioned before, that just leaves one (promising) idea as far as I am concerned (not discussed here). If that doesn't work then I am out of ideas as far as this problem is concerned.
 
  • #9
I posted essentially the "corrected" version of the question in the OP (the question is more limited in scope):
https://mathoverflow.net/questions/279199/finite-number-of-registers-and-computable-well-orderings

@Deedlit
Since you seem to be quite knowledgeable in these kinds of topics, could you take a look at this question (when and if you have enough time) and describe what is making it so confusing or difficult to communicate? Is there a change I can make, or some explanation I can add, that would make the question easier to understand? Did you understand the "problem statement" after reading the question?

Just read the first part of the question (not necessarily the second) and the explanation in the "EDIT:" I gave towards the end of the question (or possibly the comments too). I am honestly somewhat disappointed at not being able to communicate what I thought should have been a fairly easy "problem statement" to convey (though the problem statement is not short).
And perhaps more so because in a lot of my other ideas some of the terminology recurs heavily (in particular "representation of functions" and the "address" function). Are there standard terms for these particular notions that I should be using (and what exactly are they called)?

======

What apparently seems to be the general expectation is that I am asking whether certain specific functions (say f(x)=x+ω, f(x)=x·ω, f(x)=x+1 etc.) can be made non-recursive in some (computable) well-ordering (or perhaps some combination of recursive and non-recursive for different functions). But what I am asking is somewhat different.
 
  • #10
I believe I understand the first question, but I can see perhaps why the question has not received that much attention. First, very long questions and answers tend to be skipped over by a lot of people; I tend to write long answers on MSE, and I typically don't get that many upvotes; my short answers fare much better. Also, using computer science terms like "register" and "address" may scare some pure mathematicians away.
 
  • #11
Okay, I believe the answer is no, there is no negative example to your first question. At least one of the value functions will have arbitrarily large values, and we can use that value function to construct the successor function. Just go through the values of the unbounded value function one at a time, ignoring repeats; this will recreate the address function, and then we construct the successor function from there.
 
  • #12
Thanks for taking a look at the question.

Yes, I get the feeling (with overflow in particular) that a lot of people are trying to optimize their research or learning time, possibly sometimes to an extent that an ordinary person would find hard to understand. So there is an issue with longer questions there (compounded by some use of amateur or amateur-looking terminology).

That's why I initially thought about skipping the second part (but the question feels too arbitrary without it). Anyway, it is certainly good to know that the question is understandable enough --- though I wouldn't be surprised if it still has a few mistakes/odd phrases with regard to elementary set theory terminology.

Also, as far as the second part of the question is concerned, it isn't very different from the first part (except that one constraint is added). In fact, I think the first part alone might in some ways be more confusing (as you will have noticed, with the constant interchange of ℕ and ω). However, I thought it would serve as a necessary or easier test case (both with regard to a negative example and to a positive argument).

==========

I just have a simple question w.r.t. terminology. Suppose we have a well-ordering on ℕ with order type α. What is the rigorous way (in the language of elementary set theory) of describing, in just a few sentences, the functions that I wrote as "position" and "address"?

Edit:
I noticed your answer. I am a little tired right now; I will try to look at it thoroughly and comment (also, I am very slow at understanding things, so it might take some time).
 
  • #13
You could say:

Given a computable well-ordering ##\preceq## on ##\mathbb{N}## of order type ##\alpha##, let ##address## be the unique order-preserving bijection from ##\alpha## to ##(\mathbb{N},\preceq)##, and let ##position## be its inverse function.
 
  • #14
I am writing some of the problems I am having here, instead of in tweet-like comments over there (which are much harder to work with).

I think your estimate regarding lower bounds for question (2A) is most likely correct (even though I haven't gone through it in enough detail ... I expect similar lower bounds, and one also guesses the use of induction in formalising an argument like that).

But I am having some trouble following your logic for the first question. Let's just consider two registers. Basically you are saying that one of the Value functions has to be unbounded (say it is Value1). That's definitely correct. But when you talk about being able to recover the representation of the successor function from the function V1 (the representation of Value1), I don't quite get it.

I get the idea of removing repeats (which seems neat), but I am kind of lost after that. I have trouble using exact words so I will use examples to explain where I am having difficulty:

(1) First Example
Let's consider the case of one register. We have a visual like this (top row: positions; bottom row: the register's value at each position):

A(0) --- A(1) --- A(2) --- A(3) --- A(4) --- A(5) --- A(6) --- ...
A(0) --- A(0) --- A(1) --- A(2) --- A(3) --- A(4) --- A(5) --- ...

We have:
V1( A(0) ) = A(0)
V1( A(1) ) = A(0)
V1( A(2) ) = A(1)
V1( A(3) ) = A(2)
V1( A(4) ) = A(3)
... and so on

Indeed we can recover the representation of the successor function from the function V1. So suppose we already know A(0), A(1). To find A(2) all we have to do is run the function V1 on various inputs until there is an input X for which we have:
V1( X )=A(1)
To find A(3), all we have to do is run the function V1 on various inputs until there is an input Y for which we have:
V1( Y )=A(2)
and so on...
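The search just described can be carried out mechanically. Here is a sketch; the bijection A below is purely a stand-in of my own (chosen so that V1 is total and easy to write down), and the real content is just the search loop:

```python
# a toy computable bijection N -> N serving as the address function:
A = lambda n: n + 1 if n % 2 == 0 else n - 1   # swaps 0<->1, 2<->3, ...
A_inv = A                                       # it is an involution

def V1(x):                                      # representation of Value1:
    n = A_inv(x)                                # V1(A(n)) = A(n-1) for n >= 1,
    return A(max(n - 1, 0))                     # and V1(A(0)) = A(0)

def next_address(prev):
    """Given prev = A(n) with n >= 1, search inputs X = 0, 1, 2, ...
    until V1(X) = prev; that X is A(n+1), since only A(n+1) maps to A(n)."""
    X = 0
    while V1(X) != prev:
        X += 1
    return X

# starting from A(1) = 0 we recover A(2), A(3), A(4), ... in turn:
addr, chain = 0, []
for _ in range(4):
    addr = next_address(addr)
    chain.append(addr)
# chain == [3, 2, 5, 4], i.e. A(2), A(3), A(4), A(5)
```

The search terminates precisely because each target A(n) (with n ≥ 1) has a unique preimage under V1, which is the property that breaks down in the two-register case below.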

But this logic doesn't seem to carry over (this is where I am having trouble understanding your argument).

(2) Second Example
Let's suppose now we have two registers. Considering the first register (top row: positions; bottom row: its value):

A(0) -- A(1) -- A(2) -- A(3) -- A(4) -- A(5) -- A(6) -- A(7) -- A(8) -- A(9) -- A(10) -- A(11) -- A(12) -- A(13) -- A(14)
A(0) -- A(0) -- A(1) -- A(2) -- A(3) -- A(0) -- A(1) -- A(0) -- A(1) -- A(0) -- A(1) -- A(2) -- A(3) -- A(4) -- A(5)

I do not understand how the same logic carries over. For example, suppose once again we know A(0), A(1) already.
But now, to find A(2), if we run the function V1 on various inputs until there is an input X for which we have:
V1( X )=A(1)
the problem is that there are too many values satisfying this equality. So we don't know which value we will get. For example, above we have:
V1( A(2) )=A(1)
V1( A(10) )=A(1)

Maybe I have made a mistake somewhere, I don't know.

==================================

Furthermore, there is one other aspect (discussing only the first part of the question). Suppose you remove the third constraint (the one about repetition of all register values at once). And now suppose we have just one register.

There exists a negative example: a recursive well-ordering for ω where the function Value1 isn't bounded (by a finite ordinal) and yet the representation of the successor function isn't recursive. Furthermore, the representation function of Value1 is recursive and the first two constraints for the register are also satisfied.

The full description of that well-order relation might be slightly long (though not that much). But basically the idea is to place the even numbers at BB-like (Busy-Beaver-like) positions. So we have:
$$address\Big(\,\sum_{i=0}^n BB(i)\,\Big) = 2n \qquad \text{for all } n \ge 0$$
All the odd numbers are placed in between. Furthermore if we denote the representation function of Value1 as V1, we have in that well-ordering:
V1(4) = 2
V1(6) = 4
V1(8) = 6
and so on...
 
  • #15
Yes, you are right; I am mistaken.
 
  • #16
There were two purposes for asking the question (in post #9): (1) it was a reasonably interesting question in its own right; (2) I thought a more specific or limited case of this kind might give some useful information about the general case (but not necessarily).

Now, after finding the issue with "timing" in post #1, I have been thinking a little (over the last few weeks) about whether we can have certain reasonable criteria with just programs. I can think of about two or three. I am not quite clear on whether there is an easy negative example in each case.

==========

With the original question in post #1 the basic issue is relatively simple to describe. If one removes the "freezing" condition under "(C) Freezing" there is a trivial issue at ω. However, even if one includes that condition, the issue that occurs is relatively simple to describe (and, without writing out all the details, it seems quite likely to me that this or something similar will occur). Suppose we had 4 variables (the exact number of variables is unimportant) and we write the "trace vector" as (the first component describing the state):
(State, Value1, Value2, Value3, Value4)

But now it seems that we can build a situation in a program such that at two "different" instants of time t1 and t2 we have the description of the whole trace as:
At t1: (a, t1, 0, 0, 0)
At t2: (a, t2, 0, 0, 0)
When an infinite number of instances of this kind gather, this gives rise to an easy negative example. This kind of obvious issue doesn't happen with the question in post #9 (but that question is also more limited).

==========

However, I originally formed this particular idea (in post #1) as a reverse justification of sorts for another idea (and not as an independent idea). It seems to me now that there are about two or three other reverse justifications (in a fairly natural way) that one can make, which I describe next.

If it becomes fairly clear that there is a negative example for each of the "reverse justifications", then there isn't any need for an analysis, because there is nothing to show. Assuming the absence of negative examples, unfortunately any kind of analysis just doesn't seem transparent at all (even for specific ordinals). But nevertheless, I feel I should describe it.

==================

First I will give the description of what I call "trace-recursive" functions; they also happen to provide some context. Take the description of "(A) Programs" exactly as in post #1. There seems to be no need to add the freezing condition of (C). The only thing we add now is an input variable x and an output variable y, so that a program can be thought of as computing a function f:ωCK→ωCK. Of course none of this is anything new.

Now we have to describe a few terms for the definition of a trace-recursive function. We need a function T:ωCK→ωCK to describe how long the program runs for a given input. We will assume that T(x) < ωCK for all x. Furthermore, unless otherwise mentioned, we assume: $$\sum_{i=0}^{i<\omega_{CK}} T(i) = \omega_{CK}$$

First define a function Lsub:(ωCK)2→ωCK ("left subtraction") so that we have:
$$Lsub(a,b)=0 \qquad \text{if } a>b$$
$$Lsub(a,b)=\text{the unique } y \text{ such that } a+y=b \qquad \text{if } a \le b$$
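Restricted to natural numbers, Lsub is just truncated left subtraction. A minimal sketch (for infinite ordinals, e.g. Lsub(1,ω)=ω since 1+ω=ω, plain integers no longer suffice, so this only illustrates the finite case):

```python
def lsub(a: int, b: int) -> int:
    """Left subtraction on natural numbers: the unique y with
    a + y == b when a <= b, and 0 otherwise."""
    return b - a if a <= b else 0
```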
Now Value(t,x) gives the value of some variable when the time equals t and the input given to the program is x. We want to define a function SeqValue:ωCK→ωCK which intuitively places the value maps for the computations next to each other (in order of increasing inputs).

Here is how we define it formally (there may be a mistake, as I wrote it a little over a year ago and haven't re-checked). We want to find SeqValue(x). Suppose b is the smallest value of n for which:
$$\sum_{i=0}^{i\lt n} T(i) \ge x$$
Now we define:
$$c=\sum_{i=0}^{i\lt Pre(b)} T(i) $$

where Pre is the predecessor function. And now we have:

$$SeqValue(x)=Value(Lsub(c,x),Pre(b))$$
Exactly the same can be done for all the other Value functions and the State function.
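The definition above locates the block of the concatenated timeline that x falls into. Restricted to natural-number time and inputs, the intended meaning can be sketched directly (T and Value here are hypothetical stand-ins for a concrete program's run-time and value functions, not anything defined in the post):

```python
def seq_value(x, T, Value):
    """SeqValue restricted to natural-number time and inputs.

    T(i): number of steps the program runs on input i.
    Value(t, i): value of some fixed variable at time t on input i.
    Walks the computations in order of increasing input and finds
    the block of the overall timeline that time x falls into.
    """
    c = 0                      # total time consumed by inputs 0..j-1
    j = 0                      # current input
    while c + T(j) <= x:       # x lies beyond the block for input j
        c += T(j)
        j += 1
    return Value(x - c, j)     # x - c plays the role of Lsub(c, x)

# Toy example: T(i) = i + 2, and Value(t, i) encodes both arguments.
T = lambda i: i + 2
Value = lambda t, i: 10 * i + t
```

For instance, with these toy functions the blocks for inputs 0, 1, 2 cover times 0–1, 2–4, and 5–8 respectively, so time 3 is step 1 of the computation on input 1.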

Now we say that a function such as SeqState:ωCK→ωCK or SeqValue:ωCK→ωCK is "step-recursive" only if, for arbitrarily large elements of ωCK, there exists at least some recursive well-ordering such that the corresponding representation functions are recursive.

So a function f:ωCK→ωCK is called trace-recursive when there exists at least one S-while program that computes f for which the corresponding functions SeqState:ωCK→ωCK and SeqValuei:ωCK→ωCK are all "step-recursive".

Originally my intention was to define a natural notion for functions f:ωCK→ωCK, but I was unaware at the time that not all notations corresponding to recursive well-order relations are well-behaved.

But regardless of this, it seems one can prove some reasonably interesting results regarding trace-recursive functions, which are fairly easy (I have skipped them here entirely to keep the focus on the questions). Now we have the following questions regarding trace-recursive functions:
(1) Is the notion of trace-recursive functions reasonably independent of the computational model?
(2) Can we find some useful methods to prove that a given function is not trace-recursive?
(3) Is there an example of a clearly unnatural function that is trace-recursive? I don't know of any so far. But of course there might be one that isn't particularly hard to find.
(4) (trace criterion) Consider all trace-recursive functions that have the same form as the "address" function. That is, suppose we have a trace-recursive function F:ωCK→ωCK such that, for some ordinal α:
2 ≤ F(x) < ω for all x<α
F(x) = 1 for all x≥α

Furthermore, for any two distinct β1, β2 < α we must have:
F(β1) ≠ F(β2)
and furthermore, for every natural number n ≥ 2 there must be some β<α such that:
F(β) = n

Now consider two functions F and G (for the same ordinal α), both of which have the same form as the "address" function. Define the mapping function P:ℕ→ℕ between F and G in the usual way:
P(F(x)-1)=G(x)-1 for all x<α
Can the function P be made non-recursive (a negative example)? I think of this as being related to point (3).
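For a finite α the mapping P is trivially computable. A minimal sketch in Python, with F and G given as lists of their values on 0..α−1 (a hypothetical toy encoding; the interesting, possibly non-recursive case can only arise when α is infinite and F, G are trace-recursive):

```python
def address_map(F, G):
    """Given two finite 'address' functions as lists
    [F(0), ..., F(alpha-1)] and [G(0), ..., G(alpha-1)]
    (entries >= 2, all distinct within each list), return the
    partial map P with P(F(x) - 1) = G(x) - 1 as a dictionary."""
    assert len(F) == len(G)
    return {f - 1: g - 1 for f, g in zip(F, G)}

# Example with alpha = 3: F assigns 2, 4, 3 and G assigns 5, 2, 3
# to the positions 0, 1, 2.
P = address_map([2, 4, 3], [5, 2, 3])   # P = {1: 4, 3: 1, 2: 2}
```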

(5)
Does there exist a function F:ω→ωCK computable by an S-while program (with input restricted to natural numbers) such that:
$$\sum_{i=0}^{i<\omega} T(i) = \omega_{CK}$$
and furthermore the functions SeqState and SeqValue (with appropriate restrictions) are step-recursive? This seems possibly a quite difficult problem, one that can be stated independently of the previous questions.

===================

And finally, one can think of what we might call a "nested timing criterion". Unlike "timing", where it is relatively easy to see the trouble (as I described at the beginning of this post), here I am not quite clear. The point with "nested timing" is to have a whole series of programs (at once) for all values less than α, such that for every given value β<α there is a program that runs for Pre(β) time (the representation of the trace of each program will be recursive, and furthermore the index corresponding to each program will be calculable, all in the same well-ordering). It shouldn't be difficult to make this formal (just some representation functions), though I skip it here.

This pretty much concludes the "program based criteria". I can't think of much else using "programs".
 
  • #17
I was too tired after writing the long post yesterday, so I couldn't take another look at it and edit.

Just a brief mention of a few issues. I should admit that I am a little confused by the formal definition of SeqValue and SeqState that I wrote myself (as I mentioned, it was written some time ago). In case there is some problem with the formal definition, here is an informal description.

The functions SeqValue (the number of such functions equals the number of variables in the program) and SeqState take just one argument. On the other hand, the functions Value and State take two arguments. As I mentioned, Value(t,x) means the value of some variable at time t when the input given is x (and similarly for State(t,x)). So we have:
Value(0,i)=0 for all i (except for the Value function of the input variable)
Let's say the Value function for the input variable is numbered 1. Then we have:
Value1(0,i)=i for all i

Furthermore, I mentioned that T(i) is the time the program runs on input i before it halts. So if we index the lines of the program from 0 to n-1, we have:
State(0,i)=0
State(T(i),i)=n for all i
(state n indicates that control has moved past the last line, i.e. the program has halted).

The idea with the sequential value functions is just to place the value maps for the various computations adjacent to each other (in order of increasing inputs). So suppose we have a program which never alters the input variable directly, and suppose at some time t we had:
SeqState(t)=n (indicating the program has halted for input α)
SeqValue1(t)=α
Then for time t+1 we must have:
SeqState(t+1)=0 (back to first line)
SeqValue1(t+1)=α+1 (input variable value set to α+1)

For the other variables (index 2 or greater):
SeqValuei(t+1)=0 (all other variables start with value 0 initially)
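The conventions above can be checked on toy data. The following sketch concatenates the per-input traces of a hypothetical 3-line program; the functions T, State, and Value1 are made-up placeholders chosen only to satisfy the stated conventions, with each block including its halting instant:

```python
n = 3                        # number of program lines; state n means halted

def T(i):
    return i + 1             # made-up run time on input i

def State(t, i):
    return n if t == T(i) else t % n   # reaches state n exactly at halt

def Value1(t, i):
    return i                 # input variable is never altered

# Concatenate the traces for inputs 0..3 (times 0..T(i) in each block).
seq_state, seq_value1 = [], []
for i in range(4):
    for t in range(T(i) + 1):
        seq_state.append(State(t, i))
        seq_value1.append(Value1(t, i))

# Right after each halting instant, the state is back to 0 and the
# input variable holds the next input, as described in the text.
for t in range(len(seq_state) - 1):
    if seq_state[t] == n:
        assert seq_state[t + 1] == 0
        assert seq_value1[t + 1] == seq_value1[t] + 1
```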

All this description sounds a little complicated perhaps, but visually it is easier to understand. I think one could do without the functions SeqState and SeqValue and just use the functions State and Value directly (we would just have to define representations of functions that take two arguments). Though that's just a matter of definition.

==========

There is also a minor error in the statement of question (4).

Other than that, with regard to the mention of "nested timing" in the last paragraph of the previous post, I think one will have to encode the whole "trace vector" as a single number for each program (when describing it fully formally).

Also, perhaps a little description w.r.t. "trace-recursive" functions. Note that almost all functions that we normally think of fall within the set of "trace-recursive" functions. However, the importance (or lack of it) of a given set of functions here lies with the following two points:
(1) The class of functions must not be too narrow. That is, there should be no obvious functions that ought to be included but aren't.
(2) The class of functions shouldn't perhaps be too wide.

It is easy to think of a set of functions that satisfies (2) but fails (1) (for example, just include a few very reasonable functions). It is also easy to think of a set of functions that satisfies (1) but fails (2) (for example, the set of all functions calculated by some S-while program). However, satisfying both simultaneously seems to be difficult (so question (3) in the previous post is somewhat informal in that regard).
 
