hutchphd said:
If one guesses that there is an infinite regression of simulacra within simulacra, can one argue as to complexity increase or decrease?
My feeling is that very deep levels of simulations within simulations (in the way some are assuming) would be very unlikely (impossible?) to support with classical computation, because the computational cost compounds at every level. But we can't easily prove it (proving ##P \neq NP## would be a start), quantum computing changes the picture somewhat, and the host universes could be very large and old. Scott Aaronson has commented on this:
It seems to me that, to get from here to there, you’d need to overcome four huge difficulties, any one of which would be fatal by itself, and which are logically independent of each other.
- As a computer scientist, one thing that leapt out at me, is that Ringel and Kovrizhin’s paper is fundamentally about computational complexity—specifically, about which quantum systems can and can’t be simulated in polynomial time on a classical computer—yet it’s entirely innocent of the language and tools of complexity theory. There’s no BQP, no QMA, no reduction-based hardness argument anywhere in sight, and no clearly-formulated request for one either. Instead, everything is phrased in terms of the failure of one specific algorithmic framework (namely QMC)—and within that framework, only “local” transformations of the physical degrees of freedom are considered, not nonlocal ones that could still be accessible to polynomial-time algorithms. Of course, one does whatever one needs to do to get a result.
To their credit, the authors do seem aware that a language for discussing all possible efficient algorithms exists. They write, for example, of a “common understanding related to computational complexity classes” that some quantum systems are hard to simulate, and specifically of the existence of systems that support universal quantum computation. So rather than criticize the authors for this limitation of their work, I view their paper as a welcome invitation for closer collaboration between the quantum complexity theory and quantum Monte Carlo communities, which approach many of the same questions from extremely different angles. As official ambassador between the two communities, I nominate Matt Hastings.
- OK, but even if the paper did address computational complexity head-on, about the most it could’ve said is that computer scientists generally believe that BPP≠BQP (i.e., that quantum computers can solve more decision problems in polynomial time than classical probabilistic ones); and that such separations are provable in the query complexity and communication complexity worlds; and that at any rate, quantum computers can solve exact sampling problems that are classically hard unless the polynomial hierarchy collapses (as pointed out in the BosonSampling paper, and independently by Bremner, Jozsa, Shepherd). Alas, until someone proves P≠PSPACE, there’s no hope for an unconditional proof that quantum computers can’t be efficiently simulated by classical ones.
(Incidentally, the paper comments, “Establishing an obstruction to a classical simulation is a rather ill-defined task.” I beg to differ: it’s not ill-defined; it’s just ridiculously hard!)
- OK, but suppose it were proved that BPP≠BQP—and for good measure, suppose it were also experimentally demonstrated that scalable quantum computing is possible in our universe. Even then, one still wouldn’t by any stretch have ruled out that the universe was a computer simulation! For as many of the people who emailed me asked themselves (but as the popular articles did not), why not just imagine that the universe is being simulated on a quantum computer? Like, duh?
- Finally: even if, for some reason, we disallowed using a quantum computer to simulate the universe, that still wouldn’t rule out the simulation hypothesis. For why couldn’t God, using Her classical computer, spend a trillion years to simulate one second as subjectively perceived by us? After all, what is exponential time to She for whom all eternity is but an eyeblink?
https://www.scottaaronson.com/blog/?p=3482
This was in response to the article below, which was widely reported as proving that an increase in computational complexity disproves the simulation hypothesis; Scott is refuting that reading. Scott thinks the simulation hypothesis is unfalsifiable. It might be important to carefully define "falsifiable", though. We could probably find strong evidence to suggest that a particular variant of the hypothesis is false under certain assumptions.
Abstract
It is believed that not all quantum systems can be simulated efficiently using classical computational resources. This notion is supported by the fact that it is not known how to express the partition function in a sign-free manner in quantum Monte Carlo (QMC) simulations for a large number of important problems. The answer to the question—whether there is a fundamental obstruction to such a sign-free representation in generic quantum systems—remains unclear. Focusing on systems with bosonic degrees of freedom, we show that quantized gravitational responses appear as obstructions to local sign-free QMC. In condensed matter physics settings, these responses, such as thermal Hall conductance, are associated with fractional quantum Hall effects. We show that similar arguments also hold in the case of spontaneously broken time-reversal (TR) symmetry such as in the chiral phase of a perturbed quantum Kagome antiferromagnet. The connection between quantized gravitational responses and the sign problem is also manifested in certain vertex models, where TR symmetry is preserved.
https://advances.sciencemag.org/content/3/9/e1701758
My personal feeling is that, if we live in a simulation, quantum information and quantum supremacy could be due to hidden classical information/computation leaking through from the simulating level into our reality. So it could be that in the core reality there is no way to exploit quantum strangeness to do better than classical computation, and we only perceive that advantage because we don't see the whole picture.
In other words, maybe some level above us has some trick we can't really imagine, just as we have quantum computing. And maybe the level above that has its own trick. But maybe all of those tricks just exploit processing done at the level above, and ultimately all of the information/processing is accounted for classically, or by whatever type of processing is truly supported in the base reality.
Either way, I think that the deeper the level of simulations within simulations, the more you would probably lose, so to speak. Maybe the simulated universes would get smaller, less accurate, or less rich, or time would pass much more slowly. If you do buy the probability argument, it may be that the probability of being in level ##n## is exponentially smaller than the probability of being in level ##n-1##. Even if the overhead is not exponential but only a constant factor in run time/space at each level, then, if the initial simulation is finite, the nesting would either bottom out eventually or, asymptotically, subjective time would approach a standstill. A rough sketch of that compounding is below.
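To make the compounding concrete, here is a minimal back-of-the-envelope sketch; the constant per-level slowdown factor ##c## is my own illustrative assumption, not something taken from the papers above. If each level can simulate its child universe at only ##1/c## of its own rate (##c > 1##), then one second of base-reality time corresponds to
$$ t_n = c^{-n} \ \text{seconds at level } n, $$
so the rate at level ##n## falls geometrically with depth, and the total subjective time available across all nested levels during a base-reality interval ##T## is bounded by the geometric series ##\sum_{n \ge 1} T c^{-n} = T/(c-1)##. Under this assumption, arbitrarily deep levels aren't strictly impossible, but subjective time at great depth slows toward a standstill, which is the sense in which I'd expect deep nesting to "lose" something.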
But the base reality could be really, really big, or even infinite. And time could be infinite. So what could we say then? As Scott says,
'For why couldn’t God, using Her classical computer, spend a trillion years to simulate one second as subjectively perceived by us? After all, what is exponential time to She for whom all eternity is but an eyeblink?'
Of course, simulating a universe as rich as your own, but at a much slower rate, would probably also imply that the universe is not all computed simultaneously in parallel. So, for example, some parts would be waiting/slowed/frozen while others are updating. We don't have an absolute reference for time, and we do have relativity of simultaneity, so on the surface that seems sort of consistent. If instead we appeared to have absolute time and simultaneity, then maybe that would be evidence against this kind of simulation hypothesis.
It's also interesting to think about how the simulators would plug in, so to speak, if the simulated reality runs exponentially slower than the base reality. Could they somehow experience time differently while in the simulation? It would be pretty boring to plug into, or even just watch, a simulated reality if it took several years of your time to see one second of theirs.
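To put rough numbers on that (purely illustrative arithmetic, not taken from the sources above): one simulated second at a slowdown factor ##s## costs ##s## seconds of base-reality time, so
$$ s = 10^8 \;\Rightarrow\; 10^8\ \text{s} \approx 3.2\ \text{years per simulated second}, $$
while Scott's trillion years per second corresponds to ##s \approx 10^{12} \times 3.2 \times 10^{7} \approx 3 \times 10^{19}##. At factors anywhere in that range, watching in real time seems out of the question.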
We could also consider a many-simulation hypothesis that doesn't rely on recursion: just a bunch of universes, each with one level of simulation. Or, for all we know, there could be only one core reality and one simulation.