JesseM said:
Why not? Suppose you are looking at such a simulation spit out by the computer, and in it you see that a simulated being travels back in time and tells a second simulated being about some of his future actions. Do you agree that this second simulated being now knows what he's going to do in the future, and he does not have the power to avoid it?
From whose perspective are we looking now?
I agree that from my perspective outside the simulation, as long as I do not interact with the simulation, I can have "infallible foreknowledge" of what the beings will do, and they do just that.
However, from the perspective of someone inside the simulation, I do not believe that infallible foreknowledge necessarily works; it can fail, because we now have the possibility of infinite self-referential loops. If the second simulated being is told what he is going to do, are you suggesting that, armed with this knowledge, he is then necessarily constrained to do it? I don't think so.
JesseM said:
Remember, this simulation was selected by the computer because it is self-consistent, so if A sees B having eggs for breakfast one day, then goes back in time and tells him about it, how could the history possibly be self-consistent if B didn't do so?
It couldn't be self-consistent in this case; we agree on that point. Hence there must be something wrong with the assumptions. Where we differ is that you conclude from this that infallible foreknowledge is possible and it is the SCH hypothesis which ensures consistency; whereas I conclude from this that the assumption of infallible foreknowledge is at fault, and incorrect foreknowledge then ensures consistency.
JesseM said:
Are you suggesting that the thought-experiment involving the computer that generates a near-infinite number of possible histories, and then throws out all the ones that don't obey the laws of physics at every point in spacetime, is somehow logically impossible?
No, I'm suggesting the assumption of infallible foreknowledge is at fault.
JesseM said:
If not, then it seems you must agree that the output of this computer program would be only self-consistent histories, and that if the laws of physics allow backwards time travel, then some of these histories must feature time travelers telling other simulated beings what they are going to do in the future. And you can also see that the computer does not need any specialized rules to constrain the behavior of such beings, the fact that they must take the action they were told they would is just a consequence of the fact that the computer will only output histories that obey the laws of physics at every point in spacetime (and are thus completely self-consistent).
If you disagree with any of this, which part are you disagreeing with?
I disagree with the idea that someone can (come from the future and) tell me exactly what I will have for breakfast tomorrow, and that (no matter what I do) I simply cannot prove him/her wrong. This seems absurd to me, but obviously not to you (you would presumably be happy with this notion since it is compatible with the SCH hypothesis).
One way to escape this absurdity is to suggest that the notion of infallible foreknowledge is faulty.
Let me provide a very simple model that shows the flaw in the assumption of infallible foreknowledge. It does not involve free will (whatever that is) and it does not even involve human choices; it is a purely mechanistic, deterministic algorithm.
Suppose we have a simple machine within our "life" simulation with one input and one output.
Let us also suppose that the input must be a single binary digit, either 0 or 1. Similarly the output must also be a single binary digit, either 0 or 1.
Let us suppose that the machine is hardwired such that when the input is 0 then the output is always 1, and when the input is 1 then the output is always 0, and the conversion from input to output happens instantaneously.
We also suppose that the machine is precise and infallible, and that it cannot "dither" or select an indeterminate output.
The final rule is: if someone tells the machine what its output will be at a particular time, then it uses this supposed "prediction" as its input for that particular time.
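To make this concrete, here is a minimal sketch of the machine in Python; the rules are exactly the ones above, but the function names are my own invention, purely for illustration:

# Minimal sketch of the machine described above; names are illustrative only.

def invert(input_bit):
    # Hardwired rule: input 0 -> output 1, input 1 -> output 0,
    # instantaneously and without dithering.
    return 1 - input_bit

def respond_to_prediction(predicted_output):
    # Final rule: a prediction announced to the machine becomes its
    # input for that time, and the hardwired rule then fixes the output.
    return invert(predicted_output)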
We now have a (possibly infinite) self-referential loop.
However, the machine is perfectly deterministic. From outside the simulation, if I know the input then I also know the output. Hence I can predict what the machine will do as long as I do not interact with it.
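In code terms (using the sketch above), passive foreknowledge from outside is trivial, since knowing the input fixes the output and nothing is fed back to the machine:

# Outside the simulation: observe the input without announcing anything.
observed_input = 0                          # whatever the machine actually receives
predicted_output = invert(observed_input)   # foreknowledge without interaction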
But can anyone inside the simulation predict to the machine what the machine's output will be at a particular time? No, this is not possible, because no matter what we predict, the machine will use this as input and will output the opposite.
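Since the input alphabet is just {0, 1}, the point can be checked exhaustively with the sketch above: every prediction announced to the machine is falsified.

# Inside the simulation: every announced prediction becomes the input
# and is contradicted by the output.
for prediction in (0, 1):
    assert respond_to_prediction(prediction) != prediction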
The SCH hypothesis would presumably say that the machine will break down, or will continuously switch infinitely fast between 0 and 1, or will produce an indeterminate output?
Whereas my hypothesis would simply say: the machine's output is just not predictable from within the simulation, time travel or no time travel.
MF
