New Idea/Twist for a Time Travel-Like Story

In summary, the movie Time-Lapse takes an interesting approach to time travel: instead of people, information is sent back in time, in the form of photographs. The proposed twist is that the information isn't actually from the future at all, but from an ultra accurate (borderline perfect) prediction of it. That means it's not time travel anymore, yet it still comes with the same paradoxes, such as the surprise exam paradox.
  • #1
Jarvis323
I watched the movie Time-Lapse recently, which took an interesting approach to time travel. Instead of people or things traveling back in time, information is sent back in time (in the form of photographs). Like conventional time travel, it still creates paradoxes. For example, if you change the future based on your knowledge of the future, then your knowledge of the future would have been wrong, since the future had been changed. In the movie, if they deviate from the future so that the photo from the future is never taken, then they would never have had the photo in the first place. So you're left with some options: everything magically changes to adjust (like in Back to the Future), reality splits into multiple separate timelines, or free will doesn't exist and everything works out in some really coincidental way.

The idea/twist I thought of is that the information coming from the future isn't actually from the future, it's from an ultra accurate (borderline perfect) prediction of the future. Of course this means it's not time travel at all anymore. However, one interesting thing is that, supposing you could predict the future perfectly, you would be able to change the future based on the prediction and prevent the prediction from coming true (just the same as in Time-Lapse). So predicting the future is equivalent to time travel of information, and it comes with the same paradoxes.

I'm sure this has been explored before, but one thing I found interesting is the relationship to the halting problem and Gödel's incompleteness theorems, both of which arise from problems with self-referential dependencies (although not exclusively). Another paradox you can examine is the surprise exam paradox, which is invoked as soon as you predict that some event will come as a surprise before a specific day. All of these could be integrated into the story as food for thought. It's interesting because it relates time travel paradoxes nicely to theory in mathematics and computation.

Anyway, where it gets interesting to me is when I try to come up with a way to just make it work (predict a future event perfectly). You have to suppose the universe is deterministic first, I guess. What we know about simulation is that a system can theoretically simulate itself, but it will necessarily lag behind reality, so it's no use for prediction. However, you could try something like predicting extremely trivial events of a mostly isolated sub-system using the resources of a separate and much larger sub-system. For example, you could build a quantum Matrioshka brain: an entire solar system converted into one big quantum computer (let's just assume the computational system is able to acquire the initial conditions and communicate fast enough). So you have enough computational power to simulate every single atom, photon, electron, and everything else in the entire world (hypothetically, directly from perfect first principles). You use it to predict whether pizza will be served next week. But whatever answer you get, you can still change the outcome simply because you knew in advance what was predicted. And this is true even though the computer is simulating your brain itself perfectly. The problem is that the information delivered from your supercomputer, even though it's only streaming back a single yes/no answer (and suppose that is the only outside influence), is enough to make the prediction fail. The implication is that what you will choose to do with the information generally cannot be predicted (because your goal might be to contradict the prediction itself). So whether you'll have pizza next week might be undecidable (just like the halting problem). And the real reason is that, in some cases, whether you'll have pizza depends on whether it is predicted that you'll have pizza, which means the predictor would need to simulate itself as a sub-problem, which it is unable to do.
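To make that loop concrete, here is a toy sketch of the diagonal argument (Python; the names are purely illustrative, and `contrarian` stands in for a person whose goal is to contradict the oracle):

```python
# Toy model of the self-defeating prediction loop, in the spirit of the
# halting-problem diagonal argument. All names here are illustrative only.

def contrarian(announced_prediction: bool) -> bool:
    """An agent whose goal is to contradict whatever is predicted of it:
    if the oracle says 'pizza next week', it skips the pizza, and vice versa."""
    return not announced_prediction

# Suppose the Matrioshka brain announces its prediction to the agent.
# Whatever value it announces, the agent's actual behaviour differs:
for prediction in (True, False):
    actual = contrarian(prediction)
    print(f"predicted={prediction}, actual={actual}, correct={actual == prediction}")
# correct=False in both cases: no announced prediction can be right,
# no matter how much computing power produced it.
```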

Anyway, so here is an example of the way this could be used in a story: There is a group which has access to an ultra-powerful computer like the one described, and it has predicted that someone is going to do something they want to stop in the future. They know specifically how, when, and where it is supposed to happen, and they have basically one known chance at stopping it. Now the group has to be really careful not to do anything that would stop the prediction from coming true before that one moment where they plan their attack, or the simulation will need to be recomputed (which might be infeasible). The simulation should be capable of perfectly simulating the whole world, with the exception that, if they act on any information returned to them by the simulation, there is a chance it will break (because it would have to have been a self-referential prediction, which it would fail at). They might get lucky, though, if what they do based on the information doesn't change the important part of the simulation result. The most sensitive part of the system, the part that might mess things up, is what the target experiences. If they so much as look at the target differently based on the simulation result, it could ruin everything. So there is a delicate battle that boils down to keeping the simulation itself out of the light cone of this critical event.

There is some additional stuff to use to make it confusing. Like, they know the simulation halted. But if it had truly simulated itself, it would not have halted. So they know it didn't truly simulate itself. But if it didn't simulate itself, then there are two possibilities: (1) the real-world course of events didn't depend on its prediction, and it was able to predict that; or (2) the machine predicts that events will depend on its prediction, and it knows that it cannot simulate itself as a sub-problem in time, but it realizes that if it deceives the recipient of its result, it will be able to successfully predict the complete future of the planet. So the machine is actually not only predicting the future, but deliberately and deceptively manipulating it (because that is the only possible way it can complete its task of predicting the future). The problem is that it can only do this if it keeps the true result secret until after it actually happens.
 
  • #2
Jarvis323 said:
The problem is that the information delivered from your supercomputer, even though it's only streaming back a single yes/no answer (and suppose that is the only outside influence), is enough to make the prediction fail.
Therein lies the problem. The proposal to simulate one system using a larger system fails if they can interact, because then the larger system has to also simulate itself, in order to take into account any influences it may subsequently apply to the smaller system.

That's one of a number of reasons why the idea of being able to 'perfectly predict the future' fails on a theoretical as well as a practical level.
Jarvis323 said:
The simulation should be capable of perfectly simulating the whole world, with the exception that, if they act on any information returned to them by the simulation, there is a chance it will break
An interesting obstacle arises from this, because of the lack of a clear definition of 'act on'. What does it mean? If we receive a piece of advice to do A rather than B, and we follow it, despite having originally intended to do B, it's easy to agree we have 'acted on' it. What about if we do B just because we like to confound people's expectations? Isn't that also 'acting on' it, because we chose how to act based on having received the advice (if the advice had said 'Do B', we'd have done A, just to be contrary)? More generally, how can we ever tell whether, and how, a piece of info we received influenced a decision we made? I venture to suggest we can't. Even if the info seems irrelevant to the decision facing us, it will still affect our subconscious decision-making process, via microscopic alterations to mood, timing, temperature. In short, the butterfly effect!

There are a number of folk tales about how people, by trying to change the predicted future, fulfil it. One is about a woman who sees the Grim Reaper in town X in the morning, thinks he has come for her, and immediately flees to town Y, scores of kilometres away. Arriving there in the evening, she is collected by the Reaper as soon as he arrives. She protests, 'But you were in town X!' The Reaper replies, 'Yes, and I was surprised to see you there, since we had this appointment here this evening.' Another is the story of Osko - or some similar name. He receives a prophecy that he will die in Dallas. He decides never to go to Dallas. Years later he is flying from Philadelphia to Denver when a message comes over the cabin intercom that the plane has been diverted to Dallas, the nearest major airport, because of bad weather in Denver. Terrified, Osko tries to hijack the plane. This causes various bad things to happen, which end with the plane crashing in an attempted emergency landing at Dallas.
 
  • #3
Jarvis323 said:
The idea/twist I thought of is that the information coming from the future isn't actually from the future, it's from an ultra accurate (borderline perfect) prediction of the future.

I've a thick novel I'm using as a monitor stand called Wanderers by Chuck Wendig...

...and this is essentially the twist in the novel. Which sounds pretty amazing, but what was more amazing - almost too much to bear, to be honest - was the rest of the story, which involved nanotech of a particular kind that really made no sense for the purpose of the threat being faced. The book was compared to a Stephen King thriller, and while it was as thick as most of King's, it sagged in the middle the way King's stories rarely do.

If you like sci-fi, I don't recommend it, it's just a little too frustrating in the way the science miracles accrue when a dozen simpler, less obvious, and less dramatic methods could be used to get the cast to where they need to be (yes, I know drama is the essence of a thriller, but still 😡).

Jarvis323 said:
Anyway, so here is an example of the way this could be used in a story: There is a group which has access to an ultra-powerful computer like the one described, and it has predicted that someone is going to do something they want to stop in the future.

Well, Asimov's highly successful Foundation stories feature psychohistory, in which the future is predicted at large scales, which is a similar concept, though I don't recall if messages from "the future" played a part; it's been a long time since I read those.

Are you planning on writing a story of this kind, @Jarvis323? Or merely looking for thoughts on the idea?
 
  • #4
Tghu Verd said:
Are you planning on writing a story of this kind, @Jarvis323? Or merely looking for thoughts on the idea?
I'm not planning on writing anything. I don't have the time or the experience. I've wanted to write a science fiction novel before, but I don't think that it's my fate.
 
  • #5
Jarvis323 said:
And the real reason is that, in some cases, whether you'll have pizza depends on whether it is predicted that you'll have pizza, which means the predictor would need to simulate itself as a sub-problem, which it is unable to do.

It should at least be possible for a limited number of iterations. Maybe it is even possible for cases where recursive predictions do not alternate but converge to a stable result.

Have you read "The Minority Report" by Philip K. Dick? It is not based on computational prediction but on precognition, and in contrast to your story, it is the target itself that receives the prediction. However, it deals with predictions of a future that is impacted by predictions of the future. It might be interesting for you because it includes the active use of a higher-order prediction to get the predicted result. Your group™ could run iterative simulations, using the result of previous simulations as input, until they get what they want. Then they just need to put it into practice. The question why they don't do that (e.g. limited resources), or why it would still not be fail-safe (e.g. because the very next iteration, based on the predicted success, could result in a failure), would at least make for interesting dialogue.
 
  • #6
DrStupid said:
It should at least be possible for a limited number of iterations. Maybe it is even possible for cases where recursive predictions do not alternate but converge to a stable result.
Yes, that's why I added "in some cases". It might be calculable that your choice of pizza will be independent of the prediction (nothing will stop you from eating pizza). Another case is that it is calculable how you will respond to either result, so the machine can choose its prediction arbitrarily (steering the future that it is predicting). It may also be computable that you will match that chosen prediction (so the machine doesn't even need to lie, and it can reveal the true prediction). But it may also be the case that the machine must choose the prediction and lie about it.
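To make the case analysis concrete, here is a minimal sketch of the machine's decision procedure, assuming a toy world model (`simulate_world` is a hypothetical stand-in, not a real API):

```python
# Sketch of the machine's case analysis: for each candidate announcement,
# simulate the world as it would unfold after that announcement is heard.
# `simulate_world` is a hypothetical stand-in for the full world model.

def simulate_world(announcement: bool) -> bool:
    """Actual outcome given what the machine announces. Here the subject
    is a contrarian, so the outcome always contradicts the announcement."""
    return not announcement

def plan_announcement():
    # First, look for a self-fulfilling (fixed-point) announcement.
    for candidate in (True, False):
        if simulate_world(candidate) == candidate:
            return candidate, candidate  # can announce the truth
    # Last case: no fixed point exists. The machine can still predict
    # correctly, but only by announcing the opposite of what it expects.
    secret_truth = simulate_world(announcement=True)
    return True, secret_truth  # (public lie, private prediction)

announced, expected = plan_announcement()
print(f"announced={announced}, privately expected={expected}")
# With a contrarian subject there is no fixed point, so the machine must
# deceive: it announces True while privately (and correctly) expecting False.
```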

DrStupid said:
Have you read "The Minority Report" by Philip K. Dick? It is not based on computational prediction but on precognition, and in contrast to your story, it is the target itself that receives the prediction. However, it deals with predictions of a future that is impacted by predictions of the future. It might be interesting for you because it includes the active use of a higher-order prediction to get the predicted result. Your group™ could run iterative simulations, using the result of previous simulations as input, until they get what they want. Then they just need to put it into practice. The question why they don't do that (e.g. limited resources), or why it would still not be fail-safe (e.g. because the very next iteration, based on the predicted success, could result in a failure), would at least make for interesting dialogue.

Thanks for the recommendation. I haven't read the book.

It would be interesting to go deeper into this kind of thing using computability theory. There are three kinds of undecidable problems: ones that are recursively enumerable (RE), ones that are co-recursively enumerable (co-RE), and ones that are neither. RE problems are those where you can always compute a yes answer in finite time (if the answer is yes), but if the answer is no, the program may run forever and never find out. An example of an RE problem is finding the solution to an arbitrary Diophantine equation: if a solution exists, brute-force search will eventually find it; if there is no solution, the search will go on forever. co-RE is the opposite: we can compute the answer in finite time if it is no, but not if it is yes. If the problem is RE and you must give an answer, you might try running the simulation for a long time, then stopping and just guessing no if you haven't found a solution by then.
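As a concrete illustration of the RE case, here is a brute-force semi-decider sketch for Diophantine equations, with the "run for a long time, then guess no" fallback (Python; illustrative only):

```python
from itertools import count, product

def has_solution(f, bound):
    """Semi-decide 'does f(x, y) = 0 have an integer solution?' by brute force.
    If a solution exists, an unbounded search would eventually find it (RE);
    if none exists, the search would never halt, so we cap it and guess no."""
    for n in count(1):
        if n > bound:
            return False  # timeout reached: guess no, possibly wrongly
        for x, y in product(range(-n, n + 1), repeat=2):
            if f(x, y) == 0:
                return True  # a verified yes, found in finite time

# x^2 - 2y^2 = 1 (a Pell equation) has solutions, e.g. (1, 0) and (3, 2):
print(has_solution(lambda x, y: x * x - 2 * y * y - 1, bound=10))  # True
# x^2 + y^2 = -1 has none; without the cap the search would run forever:
print(has_solution(lambda x, y: x * x + y * y + 1, bound=10))      # False (a guess)
```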

Anyway, not sure if there is a great way to incorporate these concepts. For example, maybe there is an adversarial relationship between the machine and the protagonist, where the machine is trying to control and/or predict the future, and the protagonist is trying to sabotage its success. It might be more interesting if it is just the machine that wants to control the future (e.g. it has a plan it is trying to realize). If that plan is discovered somehow by the protagonist, it could make things even more interesting. The protagonist could also maybe access the machine's predictions covertly, but would be unable to tell whether they were deceptive, or which parts were the machine's choices. By delving deeply into computability theory, maybe the protagonist could figure out some provable way to make the future (or at least some events of the future) undecidable.

There could also be a twist where the narrative is following, or switching between, the simulated world and the real one without the reader knowing it at the time, or where the protagonists discover that they are actually the simulated ones.
 
  • #7
With regard to viewing or predicting the future, you can "get around" paradoxes by being extremely selective about how much you view, so that you are never able to influence what you viewed.

For example, it's impossible to predict a geological event in six months' time - an earthquake can strike without warning. So you look into the future to see when the earthquake hits, and as soon as you know, you stop looking. You don't look at anything else. Then you evacuate the place before the earthquake.

If all you saw was the ground shaking and a bin falling over to reveal a newspaper with the date on it, beneath a clock showing the time, and you then stopped looking, you didn't see whether the city was evacuated, only the time and date of the quake. As a result, you can't have changed it.

I guess it's akin to the Schrödinger's cat thought experiment. As long as you only influence the things you didn't see, then you haven't changed anything.

Taking it into the "sending information back in time" aspect, imagine if you could send data back in time based on a set of instructions written at the data's destination - i.e. you can send back all the information you have about right now to precisely 12 hours ago, unless you read in the instructions (which can be coded into the machine to do this automatically) to not send the data. The instructions are written 12 hours ago, when the data is analysed and they see a trigger that they need to respond to. Assuming an information-heavy world where everything is networked, that could be that a train has just declared that its brakes have failed. The past group declare this a situation to correct, and write instructions to stop the feed. They can't stop the brakes from failing, but they now have no information on the outcome of the event - meaning they are free to slow the train down in any way. Then they can restart the feed 12 hours after they have intervened - meaning that by the time the feed starts coming through again, they will have finished with the train and so will not gain any knowledge of the outcome.
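As a rough sketch of that blackout discipline (Python; every name here is a hypothetical toy, not a real protocol), the past-side receiver reads packets in order and, on an alarming trigger, refuses to read anything until well after its own intervention is over:

```python
# Toy model of the 12-hour feed with a blackout window. Packets are
# (future_time, event) pairs arriving from 12 hours ahead; all names
# are hypothetical.

FEED_DELAY_H = 12

def run_receiver(feed, is_trigger, intervene):
    blackout_until = None  # future-time before which we refuse to read
    for t_future, event in feed:  # packets arrive in future-time order
        if blackout_until is not None and t_future < blackout_until:
            continue  # discarded unread: we must stay ignorant of the outcome
        blackout_until = None
        if is_trigger(event):
            # Stop reading until well after we will have finished intervening,
            # so no packet can tell us how our own intervention turns out.
            blackout_until = t_future + FEED_DELAY_H
            intervene(event)

feed = [(0, "traffic jam"), (1, "train brakes failed"),
        (2, "train crash, 500 dead"),  # never read: falls inside the blackout
        (14, "routine day")]           # feed resumes after the window
run_receiver(feed,
             is_trigger=lambda e: "brakes failed" in e,
             intervene=lambda e: print("intervening on:", e))
```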

This can then lead to complex situations, critical to story-telling, like seeing the parachute of someone you love fail, or some other unavoidable fate. Perhaps a villain learns how the feeds work and so makes a poison which takes 24 hours to act, or you see that you get poisoned in 12 hours and know you cannot prevent it, because it's been seen.
 
  • #8
some bloke said:
I guess it's akin to the Schrödinger's cat thought experiment. As long as you only influence the things you didn't see, then you haven't changed anything.

I also like to see the future as a superposition of all possible events that comply with the Novikov self-consistency principle. Every additional piece of information (no matter whether about past, present, or future) eliminates all possibilities that do not comply with it. But you still have the chance to choose between the remaining possibilities. It prevents paradoxes and preserves a remnant of free will.

However, that turns real time machines (not just computed predictions of the future) into some kind of gambling machines. I wonder if the use of such a device could change the probability of events if you turn it into the equivalent of a loaded die by preferring specific information to be sent into the past. Let's say only information about events with possibly catastrophic effects is sent back, in order to prevent the disasters they might result in. Whenever that happens, all possibilities without this event are eliminated. If the use of the time machine results in a tendency to eliminate futures without potential disasters and to fix futures with potential disasters in advance, could that increase the probability of disasters?
 
  • #9
DrStupid said:
However, that turns real time machines (not just computed predictions of the future) into some kind of gambling machines. I wonder if the use of such a device could change the probability of events if you turn it into the equivalent of a loaded die by preferring specific information to be sent into the past. Let's say only information about events with possibly catastrophic effects is sent back, in order to prevent the disasters they might result in. Whenever that happens, all possibilities without this event are eliminated. If the use of the time machine results in a tendency to eliminate futures without potential disasters and to fix futures with potential disasters in advance, could that increase the probability of disasters?

I wouldn't say that fixing disasters would result in an increased likelihood of future disasters, but then in my idea you wouldn't prevent the disaster; you would be ready to react to it and have people evacuated, etc., ahead of time.

It could be woven into a string of coincidences, where people say "wow, it's so lucky that this random guy dropped all his shopping in front of me at the crossing; if he hadn't, I'd have walked into the path of the train". Or someone saying "I don't understand how all the train tickets were booked, there was no-one on there, but I'm glad they were or I'd have been on that train!".

So the disaster happens, but miraculously (and due to a lot of seemingly insignificant events) no-one was hurt.

Your suggestion (If I'm not mistaken) is that as soon as that knowledge is learnt, all possibilities which do not feature the foreknowledge are removed from the equation. My assumption is that the people trying to help must do so with minimal information in order to be able to affect the outcome.

For example: If they received information from the future that the train had crashed and 500 people had died, then they couldn't stop the 500 people from dying without creating a paradox - the information has to come back. The only way they could avoid the paradox is to receive info on 500 deaths, prevent the deaths, then send the info on 500 deaths back in time for themselves, closing the loop.
Alternatively, they could receive info from the future that the train crashes, and then that's all. At that point, it could kill millions or no-one, the possibilities are all open.

How would your system hold up in a paradox? If a person received word that they would die in a certain place in 7 days, what happens if they try to avoid it? If they have any knowledge of their own future, then it comes crashing down. The person interacting has to be outside of the system for it to work.
 
  • #10
some bloke said:
Your suggestion (If I'm not mistaken) is that as soon as that knowledge is learnt, all possibilities which do not feature the foreknowledge are removed from the equation. My assumption is that the people trying to help must do so with minimal information in order to be able to affect the outcome.

That's my assumption too. I wonder whether that might have a negative effect. Let me explain it in a bit more detail:

Let's say there are three possibilities:

A: Nothing special happens.

B: There is an alarming event (e.g. train brakes fail) but 500 people that would otherwise die can be saved (thanks to the information about the event that has been transmitted into the past).

C: There is an alarming event (e.g. train brakes fail) and the mission to save the 500 people fails.

The probabilities of the events without time travel are pA, pB, and pC, with the probability of 500 deaths being pB + pC, because nobody is there to save the people.

Now you start your time-travel rescue program, trying to minimize the information sent back into the past. That means, for example, that only the information about the failed train brakes is sent into the past, but not the information about the 500 victims. That means only the brake failure is fixed in advance, and there remains a chance to save the 500 people. So far so good.

I am concerned about the fact that you also do not post back that nothing special happens. In the worst case that could mean that whenever B or C is possible, there will be a transmission into the past that eliminates A. That actually means that A will always be eliminated, because it is always possible that something goes wrong. As a result you get pA' = 0 and pB' + pC' = 1 for the case with time travel. That means that you get spammed with warnings as soon as the program starts. No day without an imminent disaster anymore.

For a successful program the probability pC’ of failed missions must remain below the probability pB + pC of disasters without time travel. That might be a problem.
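To put toy numbers on that condition (the probabilities below are made up purely for illustration):

```python
# Toy numbers for the selection effect described above. With the program
# running, A is always eliminated, so B and C are renormalized:
#   pA' = 0,  pB' = pB / (pB + pC),  pC' = pC / (pB + pC).

pA, pB, pC = 0.98, 0.015, 0.005  # made-up example probabilities

without_program = pB + pC         # chance of the 500 deaths: nobody to save them
pC_prime = pC / (pB + pC)         # chance that the rescue mission fails

print(f"P(deaths) without time travel: {without_program:.3f}")  # 0.020
print(f"P(deaths) with the program:    {pC_prime:.3f}")         # 0.250
# With these numbers the program is a net loss: eliminating the quiet futures
# makes a failed mission (0.25) far more likely than the original disaster
# was (0.02). It only pays off when pC / (pB + pC) < pB + pC.
```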
 
  • #11
I think I see what you're saying but I'm not sure I reach the same conclusions as you do.

If nothing happens then nothing happens, there's no reason for something to happen to fill that gap.

The tricky thing is that the information would have to be transmitted in real-time from the future, to be interpreted in the past. Decisions would have to be made on the data before the next piece of data was opened and analysed, so that they can decide whether to stop the stream.

For example, if they receive a data packet every minute, then they have to read the packet and see "train brakes failed" as happening at this time - and in the future, the signal was sent when the brakes had failed but the outcome hadn't happened yet. At this point they make a note for the machine to stop sending data after the brakes failed. In the future, they might watch as the brakes fail, message is sent back, stop the data, see the heroes swoop in and save the day, restart data. In the past, they receive the data on the brakes, make a note for the machine to stop the data feed at that point, and then wait until it happens and swoop in and save the day.

If the data is being sent back in real time, i.e. it always arrives one week before it left, then it's perfectly feasible for them to receive inane data on traffic jams and other day-to-day occurrences which don't warrant intervention.

The probabilities of things occurring will not be affected by the response of someone who knew it would happen. Seeing a bus careening out of control down a street and pushing someone out of its way is fundamentally no different to knowing that the bus was going to do that since last week and pushing someone out of its way. If it was, then every time someone witnessed something bad about to happen (that crane's leaning a lot, that boat is going too fast, that car driver is on their phone) and reacted, it would cause an increased probability of it happening again!
 
  • #12
some bloke said:
The tricky thing is that the information would have to be transmitted in real-time from the future, to be interpreted in the past. Decisions would have to be made on the data before the next piece of data was opened and analysed, so that they can decide whether to stop the stream.

OK, that solves the problem. This is like opening boxes containing Schrödinger cats after a specific time. The decision not to read further information is like stopping opening the boxes. In that case the cats in the closed boxes remain in their superposition of dead and alive.

I was under the impression that there is a decision made in the future about which information to send back, and that only alarming information is selected. That would be like a mechanism that opens the box if Schrödinger's cat is dead. In the superposition of dead and alive, both options would do their job: the alive option does nothing and the dead option opens the box. But with only one possible result ever being revealed, there would be no superposition anymore. The cat would always be doomed.
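A classical toy simulation makes the difference between the two mechanisms visible (Python; purely illustrative, with coin flips standing in for the superposition):

```python
import random

# Coin flips stand in for the superposition: each box's cat is dead or
# alive with equal chance. What differs is the rule for opening boxes.
boxes = [random.random() < 0.5 for _ in range(10_000)]  # True = dead

# Mechanism 1: open every box after a fixed time -> a fair 50/50 sample.
print(sum(boxes) / len(boxes))  # ~0.5

# Mechanism 2: open a box only if the cat is dead -> among the boxes that
# ever get opened, the cat is ALWAYS dead. Selecting what to reveal
# biases what you see; the revealed cat is always doomed.
opened = [dead for dead in boxes if dead]
print(sum(opened) / len(opened))  # 1.0
```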
 
  • #13
Spider Robinson wrote a short story, "Fivesight" (one of his Callahan's Crosstime Saloon tales), which involved a man who got premonitions of future events. (As the title suggests, he called his ability "fivesight", as it was one better than foresight.)
The catch was that while he could use his knowledge to prepare for such events (for example, if he foresaw himself cutting his finger, he could make sure to have band-aids and antiseptic ready), if he tried to actually prevent the event, it would be replaced with something worse. Thus if he tried to keep himself from cutting his finger, he would end up injuring himself more badly in some other way.
 
  • #14
I think that an alternative hypothesis could be that if you sent information back (like how bad Hitler would turn out to be, or telling Lincoln not to go to Ford's Theatre, etc.), the information might be construed as a hoax and dismissed by those who might be empowered to act upon it. And who's to say that if you were to prevent something from happening (as Janus alludes to above) - say Hitler dies in the trenches in WWI - something else doesn't happen with someone even worse? The facts we 'know' also don't preclude the possibility that someone already sent information back in time saying that, as bad as an event or actor in events was, the future would have been worse had they died or had the course of what we now know as "history" changed.

Asimov did a great job of describing the prediction of future events with a broad brush in the Foundation series, one of my favorite "go-to" science fiction series of all time. Given Asimov's education, one can see the realistic standpoint he relied upon in using "math to predict large-scale changes", and it was quite compelling. I can't say with any certainty who would be worse than Hitler, or what consequences might have arisen if JFK had lived (or not gone to Dallas, etc.), but the idea is an interesting one: perhaps, in the world of probabilities, something worse might have happened, or a certain actor, event, or realization might have triggered far worse events. Perhaps something that might trigger the end of civilization as we currently know it. It's most likely, IMHO, that many such 'predictions' would have been scoffed at and discarded, and that later, those who "knew" what was foretold just opted to forget they knew about it and took it to their graves.

There's an interesting pseudo-time-travel story by Connie Willis called To Say Nothing of the Dog that also explores time travel in a more limited scope, not driven by world-shaking, cataclysmic events. Perhaps small, fractional changes may lead to larger instances with a broader scope. It's been many years since I read it; it's a good read. Therein may lie the true foundation of "time travel" and influence - the proverbial butterfly flapping its wings in Japan, as touted in chaos theory. Still, an interesting thought experiment to ponder.

As Niels Bohr reportedly remarked, "Prediction is very difficult, especially about the future."
 

