Jarvis323
I watched the movie Time-Lapse recently, which took an interesting approach to time travel. Instead of people or things traveling back in time, information is sent back in time (in the form of photographs). Like conventional time travel, it still creates paradoxes. For example, if you change the future based on your knowledge of the future, then your knowledge of the future would have been wrong, since the future had been changed. In the movie, if they deviate from the future so that the photo from the future isn't taken, then they would never have had the photo in the first place. So you're left with some options: everything magically changes to adjust (like Back to the Future), reality splits into multiple separate timelines, or free will doesn't exist and everything works out in some really coincidental way.
The idea/twist I thought of is that the information coming from the future isn't actually from the future, it's from an ultra accurate (borderline perfect) prediction of the future. Of course this means it's not time travel at all anymore. However, one interesting thing is that, supposing you could predict the future perfectly, you would be able to change the future based on the prediction and prevent the prediction from coming true (just the same as in Time-Lapse). So predicting the future is equivalent to time travel of information, and it comes with the same paradoxes.
I'm sure this has been explored before, but one thing I found interesting is the relationship to the halting problem and Gödel's incompleteness theorems, both of which arise from problems with self-referential dependencies (although not exclusively). Another paradox you can examine is the surprise exam paradox, which is invoked as soon as you predict that some event will come as a surprise before a specific day. All of these could be integrated into the story as food for thought. It's interesting because it relates time travel paradoxes nicely to theory in mathematics and computation.
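The self-reference at the heart of the halting problem can be made concrete with a toy sketch (the function names here are my own, purely illustrative): given any claimed halting oracle, you can build a program that does the opposite of whatever the oracle says about that very program, so no oracle can be right about everything.

```python
def make_halting_paradox(halts):
    """Given any claimed halting oracle halts(f) -> bool, build a
    program that contradicts the oracle about its own behavior."""
    def paradox():
        if halts(paradox):
            while True:          # oracle said "halts" -> loop forever
                pass
        return None              # oracle said "loops" -> halt at once
    return paradox

# An oracle that answers "loops forever" for everything is refuted
# immediately: its paradox program halts anyway.
p = make_halting_paradox(lambda f: False)
assert p() is None
# An oracle answering "halts" would be refuted too, but demonstrating
# that would require actually running the infinite loop.
```

This is the same diagonal move as the story's predictor: the contradiction only appears when the system's answer is fed back into the thing being answered about.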
Anyway, where it gets interesting to me is when I try to come up with a way to just make it work (predict a future event perfectly). You have to suppose the universe is deterministic first, I guess. What we know about simulation is that a system can simulate itself, theoretically, but it will necessarily lag behind reality itself, so it's no use for prediction. However, you could try something like predicting extremely trivial events of a mostly isolated sub-system using the resources of a separate and much larger sub-system. For example, you could make a quantum Matrioshka brain, an entire solar system converted into one big quantum computer (let's just assume the computational system was able to acquire the initial conditions and is able to communicate fast enough). So you have enough computational power to simulate every single atom, photon, electron, and everything else in the entire world (hypothetically, directly from perfect first principles). You use it to predict whether pizza will be served next week. But whatever answer you get, you can still change the outcome, simply because you knew in advance what it was predicted to be. And this is true even though the computer is simulating your brain itself perfectly. The problem is that the information delivered from your supercomputer, even though it's only streaming back a single yes/no answer (and suppose that is the only outside influence), is enough to make the prediction fail. The implication is that what you will choose to do with the information cannot, in general, be predicted, because your goal might be to contradict the prediction itself. So whether you'll have pizza next week might be undecidable (just like the halting problem). And the real reason is that, in some cases, whether you'll have pizza depends on whether it is predicted that you'll have pizza, which means the predictor would need to simulate itself as a sub-problem, which it is unable to do.
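The pizza argument can be sketched as a toy model (names are mine, not from any real system): the oracle commits to one bit, that bit is streamed back to a person who deliberately does the opposite, so no fixed answer the oracle could give is ever a correct prediction.

```python
def contrarian(prediction):
    """The person from the scenario: told the oracle's yes/no answer
    in advance, they deliberately do the opposite."""
    return not prediction

def run_world(oracle):
    """The oracle streams back one bit ('pizza next week?'); the
    actual outcome is whatever the contrarian then chooses."""
    prediction = oracle()
    outcome = contrarian(prediction)
    return prediction, outcome

# Whichever bit the oracle commits to, the outcome contradicts it,
# so no fixed answer can be a correct self-revealed prediction.
for oracle in (lambda: True, lambda: False):
    prediction, outcome = run_world(oracle)
    assert prediction != outcome
```

Note the prediction only fails because it is disclosed: an oracle that kept its answer secret (or lied) could still be right, which is exactly the loophole the story exploits later.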
Anyway, so here is an example of the way this could be used in a story: a group has access to an ultra-powerful computer like the one described, and it has predicted that someone is going to do something the group wants to stop in the future. They know specifically how, when, and where it is supposed to happen, and they have basically one known chance at stopping it. Now the group has to be really careful not to do anything that keeps the prediction from coming true before the one moment where they plan their attack, or the simulation will need to be recomputed (which might be infeasible). The simulation should be capable of perfectly simulating the whole world, with the exception that, if they act on any information returned to them by the simulation, there is a chance it will break (because it would have to have been a self-referential prediction, which it would fail at). They might get lucky, though, if what they do based on the information doesn't change the important part of the simulation result. The most sensitive part of the system, the part that might mess things up, is what the target experiences. If they so much as look at the target differently based on the simulation result, it could ruin everything. So there is a delicate battle that boils down to keeping the simulation's output out of the past light cone of this critical event.
There is some additional stuff you can use to make it confusing. Like: they know the simulation halted. But if it had simulated itself, it would not have halted. So they know it didn't truly simulate itself. But if it didn't simulate itself, then there are two possibilities: (1) the real-world series of events didn't depend on its prediction, and it was able to predict that; or (2) the machine predicts that the events will depend on its prediction, and it knows that it cannot simulate itself as a sub-problem in time, but it realizes that if it deceives the recipient of its result, it will be able to successfully predict the complete future of the planet. So the machine is actually not only predicting the future but deliberately manipulating it deceptively (because that is the only possible way it can complete its task of predicting the future). The catch is that it can only do this if it keeps the true result secret until after it actually happens.