JesseM said:
No, I agreed that if the machine is 100% guaranteed to work properly (to give the opposite output to its input), then the only self-consistent escape would be for no time travelers to feed its output back as input. However, if the machine has even a small chance of malfunctioning and returning the same output as its input--say, a 0.000000000001% chance--then there will be at least some self-consistent histories in which a time traveler does feed its output back to it as input, and it malfunctions and returns an output identical to its input. Such histories may be much less likely than histories where a time traveler doesn't mess with it at all, but they are not nonexistent.
Agreed. I did make the initial assumption that the machine would not malfunction, but I agree with you that if there is a non-zero chance of malfunctioning then there would be at least some self-consistent histories within GR which involve a time-traveller predicting the machine's output. However, I believe the overwhelming odds are that such a prediction would not be possible (because the probability of the machine malfunctioning is so small); hence for each possible universe where such a prediction is made, there would be an overwhelming number of possible universes where such a prediction cannot be made. And I believe similar statistics would apply to the prediction of human actions.
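To make the counting concrete, here is a minimal sketch in Python (my own illustration, not part of the original argument; p_fail is an assumed, purely illustrative malfunction probability). It enumerates the candidate histories for the bit-flipping machine when a time traveller feeds the machine's output back to it as input, and keeps only the self-consistent ones:

# Minimal sketch: enumerate candidate histories for the bit-flipping machine
# when its output is carried back in time and fed in as its input.
p_fail = 1e-14  # assumed chance the machine malfunctions and returns its input unchanged

def machine(bit, malfunctions):
    # The machine normally returns the opposite bit; a malfunction returns the input.
    return bit if malfunctions else 1 - bit

consistent = []
for bit in (0, 1):                      # the bit the time traveller feeds in
    for malfunctions in (False, True):  # whether the machine misbehaves
        out = machine(bit, malfunctions)
        # Self-consistency for the loop: the output observed (and carried back
        # in time) must equal the input that was fed in.
        if out == bit:
            prob = p_fail if malfunctions else (1 - p_fail)
            consistent.append((bit, malfunctions, prob))

print(consistent)
# Only the malfunction branches survive, e.g. [(0, True, 1e-14), (1, True, 1e-14)].
# With p_fail = 0 the list is empty: no self-consistent history contains the loop.

With p_fail = 0 (the idealized machine I originally assumed) no loop history survives at all; with any non-zero p_fail the loop histories exist, but they carry vanishingly small weight compared with histories where no time traveller interferes.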
JesseM said:
And human behavior is so complex that I doubt there is any situation in which you could say, without knowing the details of what was going on in a person's brain (and without having traveled in time), that they were 100% guaranteed to do anything. Imagine humans have spread throughout the galaxy, and there are 500 quadrillion humans alive on different planets. If I go to each one pretending to be a time traveler and say "I know you will have eggs tomorrow for breakfast", and then each one is filmed having breakfast tomorrow and all the films showing people having something other than eggs are thrown out, are you suggesting we could be confident there wouldn't be a single film in which someone did decide to have eggs the next day?
Well, firstly, if you go to each one and merely pretend to be a time traveller, then you do not have infallible foreknowledge, and it makes no difference whether they have eggs or not, does it?
But even if you were a real time traveller, that is not what I am saying. What I am saying is that I believe there would be an overwhelming number of instances where people "could" manage to falsify your so-called infallible prediction, and hence, by your own account, the prediction is simply not possible and therefore cannot be made in the first place.
JesseM said:
Yes, that is one solution, and it may be that it's by far the most likely one, but there would be at least some self-consistent histories where a person was told what they were going to do by a time traveler, even if they were extremely rare.
I think we agree.
JesseM said:
I don't agree there is any reason they "simply cannot" have such knowledge, and you have given no real arguments for why they can't besides a vague feeling of free will (even though you know this doesn't apply to the computer simulation thought-experiment).
No, it has nothing to do with free will, and yes, I have in fact given a very real argument supporting the idea that agents cannot exercise infallible foreknowledge. It has to do with self-referential loops. We have seen that even a simple machine can negate the possibility of usefully exercising apparently infallible foreknowledge through a very simple self-referential loop, and the same idea can be extended to humans and other agents.
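To spell out the loop for the idealized, never-malfunctioning machine (again just a toy sketch of my own, in the same spirit as the one above): feeding the output back as input would require a bit that equals its own negation, and no such bit exists.

# Sketch of the self-referential constraint for a machine that always negates.
# The loop demands a fixed point of negation, which does not exist for a bit,
# so no self-consistent history contains the loop.
solutions = [b for b in (0, 1) if b == 1 - b]
print(solutions)  # [] -- the "infallible prediction" can never be fed back to the machine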
JesseM said:
If it's complex, then even if it decides to try to act like that binary machine, it may get distracted, or change its mind, etc.
It "may" fail, but I believe in the overwhelmingly likely number of cases it will succeed.
JesseM said:
Well, surely you agree it is also possible for any sufficiently complex agent to decide to see what his friend eats on Thursday and then go back in time and tell him on Wednesday--yet your solution to the self-consistency problem was that no agent would ever choose to do so.
By your own argument, there must be instances where the agent CANNOT do as you suggest if we are to maintain self-consistency. All I am arguing is that these will be by far the most likely cases. However, cases where agents DO travel back in time and DO successfully predict another agent's future, whilst theoretically possible in principle (at least according to GR as it stands today), will necessarily be extremely rare.
JesseM said:
So why doesn't your argument "I cannot see why any sufficiently complex agent could not behave in this fashion" also apply here? And if you can see that self-consistency might imply that no agent would choose to give a friend foreknowledge in this way, why can't you also see that it might imply that no agent would ever choose to behave like your binary machine, or why no agent would ever choose to kill his mother before he's conceived?
I never said that it was not possible for an agent to choose not to kill his mother. Of course an agent can choose not to kill his mother; that is not the issue here. What I objected to was the implication that no agent could ever kill his mother, even if he so chose.
JesseM said:
The second one is just another way of saying that any time-traveling agent is "constrained" not to choose to tell his friend what he'll eat the next day, so I don't see why you'd prefer the second to the first.
There is a big difference.
One solution (the self-consistent histories solution) implicitly assumes that both time-travel and infallible foreknowledge are possible, and that somehow additional “constraints” are placed on agents’ abilities in the present time to ensure consistency with both.
The other solution (the impossibility of infallible foreknowledge) does not assume time-travel, and does not place any additional constraints on agents’ abilities in the present time. In addition, it might turn out in the end that there is something wrong with GR as it stands and that time-travel is simply not possible (which would be consistent with the “impossibility of infallible foreknowledge” solution).
JesseM said:
Anyway, like I said, as long as it is possible for an agent to occasionally do exactly what he is predicted to do, then there should be at least some self-consistent histories in which he does receive such an infallible prediction.
Agreed, but this neither proves that time travel is possible nor that such histories actually exist (only that they are possible in principle, assuming GR as it stands).
JesseM said:
If we're getting into the domain of likelihood, it also seems unlikely that time travel would be widely available yet no time traveler would ever decide to make an infallible prediction, so perhaps the most likely type of self-consistent history would simply be one where sentient beings never invent time travel, even if it is permitted by the laws of physics.
This is closer to what I believe is the truth, but I would go even further and suggest that the simplest solution of all is that time travel is in fact not permitted by the laws of physics; we just haven’t realized it yet.
MF
