New FQXI essay contest Is Reality Digital or Analog?

In summary: the new FQXI essay contest, "Is Reality Digital or Analog?", is now open for entries. The paper that sparked this discussion suggests that spacetime could be simultaneously discrete and continuous, in the same mathematical way that information can be, and a September 2009 lecture by Achim Kempf offers an interesting perspective on the idea. The thread below debates whether the best model of reality will turn out to be discrete, continuous, or both at once.
  • #1
chronon
New FQXI essay contest "Is Reality Digital or Analog?"

It's been a long wait but it's here at last

http://www.fqxi.org/community/essay

(It was actually launched 4 days ago - they seem to keep it pretty quiet)
 
  • #2


chronon said:
It's been a long wait but it's here at last

http://www.fqxi.org/community/essay

(It was actually launched 4 days ago - they seem to keep it pretty quiet)

I started a related thread here, a couple of weeks ago:
https://www.physicsforums.com/showthread.php?t=440638

It was sparked by the appearance last month of this paper:

http://arxiv.org/abs/1010.4354
Spacetime could be simultaneously continuous and discrete in the same way that information can
Achim Kempf
(Submitted on 21 Oct 2010)
"There are competing schools of thought about the question of whether spacetime is fundamentally either continuous or discrete. Here, we consider the possibility that spacetime could be simultaneously continuous and discrete, in the same mathematical way that information can be simultaneously continuous and discrete. The equivalence of continuous and discrete information, which is of key importance in information theory, is established by Shannon sampling theory: of any bandlimited signal it suffices to record discrete samples to be able to perfectly reconstruct it everywhere, if the samples are taken at a rate of at least twice the bandlimit. It is known that physical fields on generic curved spaces obey a sampling theorem if they possesses an ultraviolet cutoff. Most recently, methods of spectral geometry have been employed to show that also the very shape of a curved space (i.e., of a Riemannian manifold) can be discretely sampled and then reconstructed up to the cutoff scale. Here, we develop these results further, and we here also consider the generalization to curved spacetimes, i.e., to Lorentzian manifolds."

and also by this online video lecture by Achim Kempf, from September 2009:
http://pirsa.org/09090005/
It's an excellent talk, I think---well worth watching. The talk had a title similar to the paper that just appeared:
Spacetime can be simultaneously discrete and continuous, in the same way that information can.
 
  • #3


From a philosophy-of-physics viewpoint, the essay contest question might seem in a way naive.

Physical models could be seen as inferential (rather than ontological.) That is to say that the mathematical models physicists construct have to do with making measurements and relating them to other measurements---making forecasts, inferences, game strategies, etc.

The empirical information one has about the world, what one infers from that, and then can check by measuring, has admittedly a kind of discrete flavor. But it seems to me that when it comes to inferential math models one is free to use whatever mathematics works: discrete or continuous.

If some math element is continuous, and it works, OK great! If it is discrete and it works, amen, so be it! Let twerps and dweebs worry about what nature "really is", a pragmatist might say :biggrin:; what we really want is the most successful model we can make of our relationships to nature, the give-and-take of measurement and how she responds to it. I don't think I'm ready to state my personal position on this---maybe some of the commentary here and the FQXI essays will clarify the issue. Here are a couple of hypothetical laws, presented merely as conjecture.

Law One: an efficient inferential model will always seem like a description of the fundamental substance of being. But we'd be credulous suckers to actually believe it is.

Law Two: the illusion of fundamental reality will be strongest if the model impresses us as elegant.
 
  • #4


marcus said:
Law One: an efficient inferential model will always seem like a description of the fundamental substance of being. But we'd be credulous suckers to actually believe it is.

Law Two: the illusion of fundamental reality will be strongest if the model impresses us as elegant.

An implication of these two laws is that a theory seen as the simplest or most elegant possible theory, one that also accurately describes all known phenomena and predicts new phenomena that are experimentally confirmed, will be accepted as a description of the fundamental substance of being.

Even if it isn't.
 
  • #5


inflector said:
An implication of these two laws is that a theory seen as the simplest or most elegant possible theory, one that also accurately describes all known phenomena and predicts new phenomena that are experimentally confirmed, will be accepted as a description of the fundamental substance of being.

Even if it isn't.

And judging from past experience with theories (like Newt. mech.) that were the most elegant of their day and passed all the tests for a while, it probably isn't.

But I only meant there would be a strong temptation, not that everybody would accept.

Your comment reminded me of what Newton said:
http://en.wikipedia.org/wiki/Hypotheses_non_fingo

I take it that he was insisting that his law was inferential, relating observations and measurements. He was refusing to be forced to hypothesize some levers and gears explanation of why it worked. Here is the translation:

"I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction."

But I think your knowledge is at least as broad as mine and your insight at least as acute. So I'm ready to listen, and alter my position on this stuff---if you have some reasoned advice.

Basically I'm thinking that theories of spacetime quantum geometry are likely to have equivalent discrete and continuous versions. And that the correct answer to the FQXI question is YES.

Is reality discrete or continuous? Yes, it is discrete or continuous, and most likely both, in the sense that you can probably fit it equally well with discrete math models and with continuous math models. And if one seems to have an advantage for now, that is apt to be temporary.

What is your take on it, Inflector?
 
  • #6


marcus said:
But I think your knowledge is at least as broad as mine and your insight at least as acute. So I'm ready to listen, and alter my position on this stuff---if you have some reasoned advice.

Basically I'm thinking that theories of spacetime quantum geometry are likely to have equivalent discrete and continuous versions. And that the correct answer to the FQXI question is YES.

Is reality discrete or continuous? Yes, it is discrete or continuous, and most likely both, in the sense that you can probably fit it equally well with discrete math models and with continuous math models. And if one seems to have an advantage for now, that is apt to be temporary.

What is your take on it, Inflector?

I think that the YES answer is correct, though not perhaps in the same way you meant it. So I think the best answer would be BOTH. But not because one could model equally well with discrete models and with continuous models, but because the best model will end up being one that is both discrete and continuous at the same time within the same model.

There is a lot of work in particle physics that models matter as excitations in a field. I don't understand all the math yet, but it is clear that many scientists, Frank Wilczek for example, consider this the most current perspective. Fields are continuous. Excitations are discrete.

But I believe it goes further than that. I believe that reality is inherently discrete and continuous at the same time and that we will eventually end up with a theory of quantum gravity that reflects this dual nature. I guess you might say that I believe the wave-particle duality—which is just a specific case of a continuous-discrete duality—is fundamental and therefore that neither aspect is emergent.

This is the reason I think quantum gravity is so hard, and why after 70 years of research we still don't have a good theory of quantum gravity. We've been looking for an answer to the FQXI question: is reality continuous OR discrete? And popular approaches to quantum gravity have assumed one OR the other answer, but not BOTH.

General Relativity is continuous and quantum mechanics is discrete. The major quantum gravity efforts have been trying to quantize gravity because they see the mathematics of QM as fundamental. A small number of less common approaches have looked into ways of getting an emergent QM out of continuous geometry. I think these approaches will have difficulty reconstructing reality because they are all missing one part of the fundamental continuous-discrete duality, the part they left out. For string theory and LQG, it is the continuous.

I am interested in quantum gravity approaches that start with the duality itself as fundamental. For example, with a spacetime geometry that is itself based on mathematical structures that are both continuous and discrete at the same time, rather than purely discrete like string theory or LQG. A field built on such a structure might be continuous at the larger scales while the interactions on the quantum scale would be between discrete excitations.

I haven't studied the math for this sort of thing well enough yet to know how many different structures would have the requisite traits, but I can think of at least one in 4D, so I believe the idea is worth further study. It is very possible that upon further study I'll find a field of mathematics that describes a class of mathematical structures meeting the dual-nature requirements. Any pointers or ideas for where to look would be appreciated.
 
  • #7


marcus said:
Physical models could be seen as inferential (rather than ontological.) That is to say that the mathematical models physicists construct have to do with making measurements and relating them to other measurements---making forecasts, inferences, game strategies, etc.

The empirical information one has about the world, what one infers from that, and then can check by measuring, has admittedly a kind of discrete flavor. But it seems to me that when it comes to inferential math models one is free to use whatever mathematics works: discrete or continuous.

I fully agree that there is nothing wrong with humans working with continuum models in the way you phrased it there...

...but I'll throw in my perspective on the "inferential model" view.

There are at least two versions of this as I see it. Most people seem to refer to the weak version, which leads to natural conclusions of the kind marcus phrased: if the continuum models happen to be the most fit and most elegant, then why not use them?

What I characterize as the weak version means that we see the physical models of human science as inferential models. I.e. they have evolved as human scientific knowledge evolves and are at any instant the most fit description of our understanding, which further guides our own actions in scientific work. With this I mean that it is certain EXPECTATIONS that have motivated us to build particle accelerators and colliders, telescopes etc.

But I insist that there is a much deeper view of the inferential perspective that comes with the stronger version of the view. It suggests that not only human science and knowledge are to be understood as "inferential", but that ALL PHYSICAL interactions are to be seen as inferential. This is a MUCH stronger and deeper vision than the weak version.

In this latter perspective, we are forced to try to understand representation theory and computation from a physical perspective. Here it should in principle yield falsifiable predictions as to whether the "physical inference" works with discrete or continuum pictures. It's here I find the arguments for discreteness the strongest. But this kind of discreteness has nothing to do with naive discreteness such as thinking of spacetime as physical chunks of something - the discreteness is more in the inferential exchange between the parts.

Also, accepting the weak version while rejecting the strong version has the objectionable trait of bringing HUMANS in as a physical system that somehow doesn't follow the same logic as other systems - this too indicates that something isn't consistent in the weak version.

/Fredrik
 
  • #8


In addition if it wasn't obvious:

The weak version implies that physical laws - AS KNOWN BY HUMANS - are evolving. Which they do; this is really undeniable, just look at history.

The strong version implies that physical law is evolving in a much deeper sense: it's not just human knowledge that evolves, the law itself evolves. (Compare Smolin's and Unger's evolution of law.)

/Fredrik
 
  • #9


Fra said:
that ALL PHYSICAL interactions are to be seen as inferential.

To expand a bit.

This idea would mean that the classification of interactions as we make it today, in the form of the standard model, should be understood as corresponding one-to-one to an abstraction based on a classification of inference systems and their interactions.

E.T. Jaynes and others have seen inference, such as Bayesian probability, as an extension of logic, and this is how Jaynes "derived" the axioms of probability from assumptions that seem natural for rational reasoning. But Jaynes starts by assuming a connection to the continuum and degrees of plausibility.

If you then take the view that inference is a generalisation of logic and rethink what Jaynes did, but suppose that degrees of belief are not really best labeled by real numbers but instead by bounded integers, then you get something a little different: the inability to actually distinguish the continuum makes a difference to the action of the system, if you assume that the action of a physical system is a rational response to its inferred expectations about its environment.

For a second I thought that maybe it would be fun to try to write something and submit it to that essay competition, but I feel that I have to do more work first, so that one is able to take this from the philosophy domain, or (equally bad) the pure math domain, to the predictive domain.

/Fredrik
 
  • #10


From your extract of Achim Kempf's arxiv paper:
marcus said:
The equivalence of continuous and discrete information, which is of key importance in information theory, is established by Shannon sampling theory: of any bandlimited signal it suffices to record discrete samples to be able to perfectly reconstruct it everywhere, if the samples are taken at a rate of at least twice the bandlimit.
I don't particularly like this argument for a couple of reasons. The first is that the sampling theorem requires the signal itself to be not just continuous but smooth. A signal that takes on only discrete values is not band limited. Discretizing a sampled signal inherently loses information.
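
To see both points concretely, here is a minimal numerical sketch (Python/NumPy, purely illustrative and not taken from Kempf's paper). It uses the periodic, finite-sample analogue of Shannon's theorem: a signal containing only harmonics below the Nyquist limit is reconstructed everywhere from finitely many samples essentially to machine precision, whereas rounding (quantizing) those same samples introduces an error that no reconstruction can undo.

Code:
import numpy as np

rng = np.random.default_rng(0)

# A periodic, bandlimited test signal: a handful of harmonics, none above f_max.
f_max = 6                                  # highest harmonic present (cycles per period)
amps = rng.normal(size=f_max + 1)
phases = rng.uniform(0.0, 2.0 * np.pi, f_max + 1)

def x(t):
    """Bandlimited signal of period 1: a sum of harmonics 0..f_max."""
    t = np.asarray(t, dtype=float)[..., None]
    return np.sum(amps * np.cos(2.0 * np.pi * np.arange(f_max + 1) * t + phases), axis=-1)

# Sample above the Nyquist rate: N equispaced samples per period, with N > 2 * f_max.
N = 16
samples = x(np.arange(N) / N)

def reconstruct(t, s):
    """Trigonometric (sampling-theorem) interpolation from N equispaced samples."""
    coeffs = np.fft.fft(s) / N             # discrete Fourier coefficients of the samples
    freqs = np.fft.fftfreq(N, d=1.0 / N)   # the signed integer frequencies they belong to
    t = np.asarray(t, dtype=float)[..., None]
    return np.real(np.sum(coeffs * np.exp(2j * np.pi * freqs * t), axis=-1))

t_test = np.linspace(0.0, 1.0, 1000, endpoint=False)
err_exact = np.max(np.abs(reconstruct(t_test, samples) - x(t_test)))

quantized = np.round(samples)              # coarse quantization of the sample values
err_quant = np.max(np.abs(reconstruct(t_test, quantized) - x(t_test)))

print(f"max error from exact samples:     {err_exact:.1e}")   # essentially machine precision
print(f"max error from quantized samples: {err_quant:.1e}")   # a visible, unrecoverable error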

The second reason is the desire on the part of some physicists to prove the universe is computable. A continuous universe is not computable. The sampling theorem is of no help here. A perfectly sampled band limited signal is not computable; almost all of the samples will be non-computable numbers. A computable universe would entail everything being finite in extent and discrete in nature. Time would have to have a definite beginning. Now that some physicists are starting to talk about "before the big bang", even the concept of finite time is perhaps suspect.
 
  • #11


D H said:
I don't particularly like this argument for a couple of reasons. The first is that the sampling theorem requires the signal itself to be not just continuous but smooth. A signal that takes on only discrete values is not band limited. Discretizing a sampled signal inherently loses information.

I didn't comment previously but I don't like that argument either.

How do you a priori KNOW that the signal is bandlimited? You don't, of course. In real measurements you truncate the bandwidth with an antialiasing filter, so that frequencies above the Nyquist limit don't corrupt the sampling through aliasing effects.
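
To illustrate the aliasing point numerically (a tiny Python/NumPy sketch of the standard textbook effect, nothing specific to this thread): once sampled at 100 Hz, a 70 Hz tone is indistinguishable from a 30 Hz tone, which is exactly why the bandlimit has to be enforced by an antialiasing filter before sampling rather than assumed afterwards.

Code:
import numpy as np

fs = 100.0                              # sampling rate (Hz); the Nyquist limit is fs/2 = 50 Hz
t = np.arange(32) / fs                  # sample instants

high = np.cos(2 * np.pi * 70.0 * t)     # a 70 Hz tone, above the Nyquist limit
alias = np.cos(2 * np.pi * 30.0 * t)    # the 30 Hz tone it folds onto (70 = 100 - 30)

# Once sampled, the two tones are indistinguishable:
print(np.max(np.abs(high - alias)))     # ~1e-15, i.e. identical up to rounding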

As I see it, the most obvious way in which continuous and discrete are NOT equivalent is when you try to compute or evaluate an expectation or action; here it makes a difference whether the background (the space of possibilities) is continuous or discrete. Sometimes, as we know, it's even ambiguous how to DEFINE this, because exactly how do you COUNT possibilities if the set of possibilities is uncountable? :) IF you play with limits, then the way you take the limits does matter. Here the discrete way is better defined. The continuum, as I see it in this "counting context", simply represents a redundancy that I have not found a way to make sense of physically.

/Fredrik
 
  • #12


inflector said:
I think the best answer would be BOTH. But not because one could model equally well with discrete models and with continuous models, but because the best model will end up being one that is both discrete and continuous at the same time within the same model...

I believe that reality is inherently discrete and continuous at the same time and that we will eventually end up with a theory of quantum gravity that reflects this dual nature. I guess you might say that I believe the wave-particle duality—which is just a specific case of a continuous-discrete duality—is fundamental and therefore that neither aspect is emergent.

“Both” makes sense to me too... given that there are so many different ways in which quantum intertwines discreteness with continuity. E.g. the angular momentum of a particle can only be “up” or “down” – but we can choose to measure it on a 360-degree continuum of angles. This is just a single parameter – should we call it “discrete” or “continuous”?
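
To make that example a bit more concrete, here is a small Python sketch of the textbook spin-1/2 statistics (my own illustration, with made-up variable names): the orientation of the measurement axis is a continuous knob, every individual outcome is one of just two discrete values, and only the statistics vary continuously with the angle, as cos^2(theta/2).

Code:
import numpy as np

rng = np.random.default_rng(1)

def measure_spin(theta, shots=100_000):
    """Simulate measuring a spin-1/2 prepared 'up' along z, along an axis tilted by
    angle theta. Standard QM gives P(+1) = cos^2(theta/2); each individual outcome
    is nevertheless one of only two discrete values, +1 or -1."""
    p_up = np.cos(theta / 2.0) ** 2
    return np.where(rng.random(shots) < p_up, 1, -1)

for theta in np.linspace(0.0, np.pi, 5):    # the measurement angle is a continuous parameter
    outcomes = measure_spin(theta)
    print(f"theta = {theta:.2f} rad   outcomes seen: {np.unique(outcomes)}   "
          f"mean = {outcomes.mean():+.3f}   (cos(theta) = {np.cos(theta):+.3f})")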

From a functional standpoint, it seems clear that you need some kind of basic discrete structure to define anything at all like our universe. At the atomic level, without quantized energy levels there would be no stable electron orbits. And how are we going to get the basic particle symmetries from a theory containing only continuous fields? On the other hand, a purely digital theory may imply a kind of precision that doesn’t seem to exist in our world – given that physics is basically about observables, and no physical observation is mathematically exact. To include continuous parameters in a theory doesn’t necessarily imply the ultimate reality of a well-defined continuum of values – it may just be a way to reflect the fundamental “uncertainty” in the world’s information-structure... if information about a physical system only has a determinate value in relation to another system that “observes” it.

In any case, physics demonstrates a remarkable variety of ways to combine analog and digital information – so the either/or of the FQXI question does seem “naive”.
 
  • #13


Is a polynomial function - say, with integer coefficients - digital or analog? For instance, f(x) = 4x^2 - 3x.
 
  • #14


Fra said:
How do you a priori KNOW that the signal is bandlimited? You don't, of course. In real measurements you truncate the bandwidth with an antialiasing filter, so that frequencies above the Nyquist limit don't corrupt the sampling through aliasing effects. /Fredrik

The argument for invoking sampling theory is that the bandlimitation immediately follows from the presence of a minimal length. Whether there actually is a minimum physically relevant distance in nature is of course unsettled, but arguments for its existence follow pretty straightforwardly from the principles of GR together with quantum theory. Of course, in general, continuous and discrete representations are not equivalent. However, IF there is a minimal length, the sampling theorem applies due to the resulting bandlimitation. Examples where this is the case are noncommutative geometry (not in the sense of Connes) and the generalized uncertainty relations arising from string theory.

Maybe you've been aware of this, but that's why I find the idea intriguing. As a plus, the gravity action can simply be written as Tr(1) in the eigenbasis of the Laplacian, giving the gravitational path integral a much simpler form.
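
For readers wondering why Tr(1) would have anything to do with a gravitational action, here is the rough heuristic as I understand it (a schematic sketch in LaTeX using the standard Weyl-law asymptotics, not a quotation from Kempf's paper): with an ultraviolet cutoff \Lambda on the spectrum of the Laplacian on a d-dimensional manifold M, the trace of the identity just counts the eigenvalues below the cutoff, and Weyl's law says that this count is asymptotically proportional to the volume of M,

\mathrm{Tr}_{\lambda_n \le \Lambda}(1) \;=\; \#\{\lambda_n \le \Lambda\} \;\sim\; \frac{\omega_d\,\Lambda^{d/2}}{(2\pi)^d} \int_M \sqrt{g}\,\mathrm{d}^d x ,

where \omega_d is the volume of the unit d-ball. So the leading term is a cosmological-constant-type volume term; in spectral-action-style arguments the curvature (Einstein-Hilbert) terms show up among the subleading corrections.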
 
  • #15


I'm not sure if I can convey this but a little more thought:

If you're referring to a minimal length such as the Planck scale and the formation of black holes, then you are IMHO relying on higher constructs that, in a fundamental rethinking, aren't yet defined. It's suggestive and indeed a hint, but in my world not quite an acceptable argument in a new reconstruction.

My reasoning would rather be that the action of a given observer must be indifferent to whether there is a continuum abstraction in which the distinguishable structures could be consistently embedded. I think a rational action must be invariant with respect to such an embedding, because the embedding can't be represented. It's just the same requirement that my decisions are independent of information that is unknown to me.

Somehow, when considering sets of possibilities, the continuum of possibilities simply represents IMO a non-inferrable state (due to limited processing time) and a non-representable state (due to limited storage capacity).

It seems like the only reasonable starting point is a finite inference system with finite resources, and the action of this system - as far as anything decidable at all is concerned - couldn't possibly depend on some imagined embeddings. I just find it a completely irrational thought.

/Fredrik
 
  • #16


arivero said:
Is a polynomial function - say, with integer coefficients - digital or analog? For instance, f(x) = 4x^2 - 3x.

I guess you mean that it's both. If x is real then the function is analog-valued, but one can still argue perhaps that the space of all such functions is discrete (parameterized by sets of integers). From the inference perspective, though, I would ask about this in a different way:

If x is real, could any computer (observer) infer that this functional relation holds for all real values in its domain, with a finite number of operations, or from a finite interaction history?
I think not. So the problem is not the integers; it's that a relation defined on an uncountable set would seem to require an uncountable number of interactions to establish - if we think that interactions are registered as distinguishable events - and an infinite amount of information.

I think in reality both the domain of x and the value domain are also discretized. The actual continuum never exists, except in the mathematical sense. But I think that when one looks at the fine structure of the action, the fact that it's only an approximation should be revealed.

/Fredrik
 
  • #17


According to an experiment with ultracold neutrons:
arXiv:hep-ph/0306198
Archil Kobakhidze claims that gravity is not an entropic force (http://arxiv.org/abs/1009.5414, 27 Sep 2010) because of a quantum state formed in the slit.
Does this mean that quantisation dominates at the quantum level, and that it is digital?
 
  • #18


Fra said:
It seems like the only reasonable starting point is a finite inference system with finite resources, and the action of this system - as far as anything decidable at all is concerned - couldn't possibly depend on some imagined embeddings. I just find it a completely irrational thought.

Of course this still doesn't forbid effective embeddings such as what I think marcus referred to; there is nothing "wrong" with that. It's just that such continuum models seem to represent an extrapolation into the continuum that lacks physical significance, and thus doesn't help in any way (on the contrary, it obscures the reconstruction with various apparent-only freedoms). This is why I tend to think that continuum constructions, although consistent with what we seek, seem a little confused in their guidance.

/Fredrik
 
  • #19


It may be possible to derive discrete values from a continuum theory. But you cannot derive continuous values from a discrete set. So if every measurable thing (including spacetime) ultimately comes only in discrete values, does that prove that the universe is discrete? Or, if every value of discrete measurements is predicted by a continuum theory, does that prove that the universe is continuous?

If spacetime is discrete, then there are discontinuities from one place to another. If there are discontinuities, then there is a loss of causality from one place to the next. For a discontinuity, by definition, means that something that concerns one place does not concern the next, so that information does not cross a discontinuity. These are my thoughts on the subject.
 
  • #20


friend said:
It may be possible to derive discrete values from a continuum theory. But you cannot derive continuous values from a discrete set. So if every measurable thing (including spacetime) ultimately comes only in discrete values, does that prove that the universe is discrete? Or, if every value of discrete measurements is predicted by a continuum theory, does that prove that the universe is continuous?

If spacetime is discrete, then there are discontinuities from one place to another. If there are discontinuities, then there is a loss of causality from one place to the next. For a discontinuity, by definition, means that something that concerns one place does not concern the next, so that information does not cross a discontinuity. These are my thoughts on the subject.

Kevin Knuth - 27.Sept.2010
http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5161v1.pdf
"Inspired by Cox, I have been working to understand how to derive calculi from
algebras in general by selecting consistent quantification schemes for partially-ordered
sets and lattices. At one level, this more fundamental understanding has resulted in
a much simpler derivation of the product rule that might have been more to Jaynes’
liking. However, at a deeper level, we now understand how constraints imposed by
ordering relations can result in the derivation of physical laws. This recently has been
demonstrated with a novel derivation of the complex arithmetic in Feynman’s path
integral approach to quantum mechanics [10, 11] as well as a derivation of special
relativity from a partial order on a set of events [12]. Each of these examples is related to
information in a different way. In some examples the connection to information is direct
as we consider a partial order on states of knowledge themselves. However, we have
also employed these ideas by considering the partial order that arises from the way that
events can be informed about one another or the partial order that arises from composing
sequences of measurements aimed at gaining information."

It seems that digital quantum events can be informed about one another.
 
  • #21


Thanks for your thoughts. Here are some comments from my perspective.

friend said:
But you cannot derive continuous values from a discrete set.

The continuum is of course recovered as a limit of discrete systems, which makes sense for "sufficiently complex" systems. Just like the Dedekind construction introduces real numbers from rational ones.

The difference is only whether the ACTUAL LIMITS exist in nature, or whether nature just gets close enough.

friend said:
So if every measurable thing (including spacetime) ultimately comes only in discrete values, does that prove that the universe is discrete?

In some ontological realist sense, I think the answer is no of course. It proves nothing. But OTOH, what the universe "really is" is the wrong question. We need not & should not answer that IMO :)

friend said:
If spacetime is discrete, then there are discontinuities from one place to another.

To define a discontinuity, one needs a (more) continuous background. If there is no such background, I would not say there is a discontinuity, just DISCRETIZATION (which is not the same thing, I'd say), since no one would be able to distinguish anything in between.

friend said:
If there are discontinuities, then there is a loss of causality from one place to the next.

I think that discreteness is not objective, and therefore there is a certain loss of decidability, but this is not a problem; I see it as a feature of nature. The undecidability is what warrants a form of locality: one system's decisions and actions are independent of what it can't distinguish. That doesn't mean that the decisions and expectations are always right, but even flawed expectations determine the action. What's beyond "expectations" is deeply undecidable. I think that's how nature works.

/Fredrik
 
  • #22


How would one even create a theory of a world that is not computable? If the purpose of a theory is to derive predictions, and computation (whether carried out by a human mind or any other computing device) is the way to arrive at those predictions, a non-computable world seems inherently unpredictable, and hence, one can't formulate a theory in the ordinary sense about it. A non-computable physics seems an awful lot like mysticism to me: one would have to somehow intuit the theory's answers to problems posed to it, in the absence of the possibility of any algorithmic, step-by-step derivation. Even if one argues, as some philosophers do (though mostly because of a misunderstanding of Gödel's theorems), that the mind isn't computable (and hence, there might be some non-computable understanding of non-computable physics possible), every calculation I have ever written down -- and every calculation I will ever write down -- certainly could have been equally well carried out by a perfectly ordinary Turing machine.

It's related to the problem of how to prove hypercomputation -- if somebody hands me a device and claims that it is capable of such a feat, how do I check this? In order to do so, I would need to already have a device I know to be capable of hypercomputation; otherwise, what would I do? It would certainly be possible to disprove this claim, by asking the machine to decide whether or not an algorithm halts: if it claims that it won't, yet the algorithm, if run, actually does, then the machine is not a hypercomputer. The converse runs into obvious problems, however.
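
The asymmetry in that last point can be made concrete with a toy sketch (Python, entirely illustrative; the "programs" here are just generators that yield once per simulated step): a claim that a given program never halts can be refuted by running the program long enough, but no finite amount of running can ever confirm it.

Code:
def runs_within(program, max_steps):
    """Step a toy 'program' (a generator function that yields once per step and
    returns when it halts) for at most max_steps steps.
    Returns True if it halted within the budget, False if we ran out of budget."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return True          # the program halted: a 'never halts' claim is refuted
    return False                 # no verdict: it might halt later, or never

# Two toy programs. Suppose an alleged hypercomputer claims that NEITHER halts.
def halts_eventually():
    n = 0
    while n < 10**6:
        n += 1
        yield                    # this one quietly halts after a million steps

def loops_forever():
    while True:
        yield

print(runs_within(halts_eventually, 2 * 10**6))   # True: that claim is refuted by brute force
print(runs_within(loops_forever,    2 * 10**6))   # False: the claim survives, but is never confirmed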
 
  • #23


S.Daedalus said:
How would one even create a theory of a world that is not computable?

Or even more fundamentally, how could a non-computable universe even exist? If there is no machine that could possibly come up with the answers for the next state, then not even the universe could manipulate the bits of information about the current state in order to move to the next state.
 
  • #24


S.Daedalus said:
How would one even create a theory of a world that is not computable? If the purpose of a theory is to derive predictions ...
You appear to have a misunderstanding of what computable means. Newtonian mechanics and general relativity are obviously non-computable theories. The quantum mechanics I learned in college was a non-computable theory. Even the standard model of physics is a non-computable theory. Any theory that involves derivatives in its underlying mathematics is inherently a non-computable theory because almost all real numbers are non-computable.

That does not mean I can't compute something useful based on those theories. The trivial initial value problem f'(x)=f(x), f(0)=1 has an obvious and well-known solution, for example. All is not lost just because almost all real numbers are not computable numbers. There are an infinite number (countably infinite) of computable numbers within any neighborhood of a non-computable real number. In other words, my computations can be made arbitrarily close to the true value.
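
For what it's worth, the "arbitrarily close" point can be made concrete in a few lines of Python (my own toy illustration, not anything from the post above): the solution of f'(x) = f(x), f(0) = 1 is f(x) = e^x, and its value at a rational argument can be pinned down to any requested tolerance by a finite, purely mechanical computation.

Code:
from fractions import Fraction

def exp_approx(x, tol):
    """Approximate f(x) = e**x, the solution of f'(x) = f(x) with f(0) = 1, by partial
    sums of its Taylor series. Exact rational arithmetic, so the only error is the
    truncation, which we drive below tol. (The simple tail bound used here assumes
    0 <= x <= 1.)"""
    x, tol = Fraction(x), Fraction(tol)
    term, total, n = Fraction(1), Fraction(1), 1
    while True:
        term *= x / n                 # term is now x**n / n!
        total += term
        n += 1
        if term < tol:                # for 0 <= x <= 1 the neglected tail is below the last term
            return total

for k in (2, 4, 8, 16):
    approx = exp_approx(1, Fraction(1, 10**k))
    print(f"e to within 10^-{k}: {float(approx):.15f}")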
 
  • #25


What you say is true, but I have something different in mind. As you note, in practice, every prediction actually derived within any current theory is derived within a computable approximation to the theory -- which entails that within these theories, one could never decide whether reality is 'actually' computable or non-computable, as there is always a computable version of any theory that agrees with observation to any desired degree -- meaning that it always is possible that underneath it all exists a computable reality, which calls into question that there is a 'true value' beyond what can be computed (or, at the very least, the meaning of this 'true value').

Also, if I'm not mistaken, attempts to create models of hypercomputation within these theories generally yield unphysical infinities (quantum hypercomputers needing an infinite superposition of states, for example).

But things could, in principle, be different -- the outcome of a physical process could depend explicitly upon solving an uncomputable problem. No clever example readily springs to mind, though one could conceivably construct something depending on the final state of some Zeno machine, or upon whether Thomson's lamp is on or off after two minutes, but the gist of it is that in such a theory, in principle, there does not exist a computable approximation -- after two minutes, the lamp is (presumably) either on or off, and no computable theory can tell you which, whereas a non-computable theory should be able to, though I can't imagine how.

That's then the dividing line between computable and non-computable theories for me: within the latter, devices capable of solving the halting problem may exist, whose behaviour is such that it can't be worked out, for instance, using pen and paper (i.e. computable means). If reality were such that it was described by such a non-computable theory, then prediction using any standard means would be in principle impossible, since such prediction is then equivalent to solving the halting problem.
 
  • #26


There are various flavours of these terms. I'm not necessarily thinking in terms of some strict computer science definition. I tend to prefer the term "inferrable", which is pretty much similar, but there are two types of inference: certain DEDUCTIVE inference and uncertain INDUCTIVE inference.

In the general case I refer to, the DEDUCTIVE part of the inference (or computation) corresponds to the expectations (i.e. predictions). There is also a part of the inductive inference that isn't deductive; this corresponds to undecidability.

By inferrable with respect to an observer, I mean that the observer has the required computational and information capacity to execute the inference. And the most important thing for me is that I think the observer's EXPECTED action depends ONLY upon the inferrable part. Any inconsistency between expectations and future outcomes results in corrective "forces" that cause evolution, also of the "computing hardware" itself. This can never be deductively predictable, and does not need to be. It's a fact that a decision maker does not need to be certain of something in order to act. What we "compute" corresponds to the best inference or best guess, which leads to "rational action".

For me at least, computability or inferrability is highly tied to the inference context, or computer. There is no objective, meaningful notion of this; but again, that's not needed. All that's needed for rational action is a relative notion: a specific computer/observer either can compute/infer something or it can't. Whatever it can't infer should leave the action of the system/observer invariant (or at least that is MY basic conjecture).

/Fredrik
 
  • #27


What I find interesting is then to try to interpret physical subsystems as acting towards their environment consistently with such a "computational" idea, one that contains undecidable elements.

This is the connection I make with the original topic: digital or analog.

Maybe we are able to infer a classification of interactions between subsystems by parametrizing their complexity (~energy/mass), and find that as we approach the high-energy domain, the number of logically POSSIBLE computations is reduced and may even converge to... some TOE point?

This entire line of thinking, to me at least, simply doesn't make any sense in terms of continuum pictures. The analog picture seems to be just a limiting case, where the complexity is so large that it IS continuous for all practical purposes, except in the kind of fundamental rethinking this thread is about. There it seems to matter to me.

/Fredrik
 
  • #28


D H said:
The quantum mechanics I learned in college was a non-computable theory. Even the standard model of physics is a non-computable theory.
Thinking about it some more, I'm actually not too sure that I would consider these non-computable theories. Sure, they're defined over the non-computable set of the reals, but entirely in terms of computable functions, no? Abstracting somewhat and considering a theory as something that computes final states from initial values, as long as those initial values are computable, the final states will be, too; and we can't enter any non-computable initial values, since all values we can enter are obviously finite. So we actually have computable functions from computable reals to computable reals, in the end.

A non-computable theory then would to me entail non-computable functions, which, for example, could take us to the non-computable reals from the computables, and for which, in general, one would have no expectation of finding a computable approximation.

(Fra, sorry for talking past you somewhat, but I'm afraid I don't understand your ideas well enough to make any kind of useful comment on them. :smile:)
 
  • #29


S.Daedalus said:
Thinking about it some more, I'm actually not too sure that I would consider these non-computable theories. Sure, they're defined over the non-computable set of the reals, but entirely in terms of computable functions, no?
No. Allowing the continuum means that the universe is not computable. Computable here has a specific meaning and is related to the answer to the question "Is the universe a giant computer (a giant digital computer, to be specific)?"

In my opinion this line of thinking borders on psychoceramics, except in this case there are some extremely well-regarded physicists who do think the universe *is* digital. My opinion: The universe is a giant computer: A giant hybrid computer to be specific. The universe is not computable.
 
  • #30


D H said:
No.
Could you elaborate? Is there any non-computable function (in the usual terms, i.e. function not computable by any Turing machine, non-recursive function, etc.) within present-day physics?

Of course the continuum is in principle not computable, but it's not actually used, as far as I can tell. All calculations I ever performed (and all I can ever see myself performing) certainly went from computable numbers to computable numbers.

(And as far as I am aware, hybrid computers -- at least, ones that can actually be built -- are equivalent to Turing machines, and thus completely computable. One might speculate about some 'real' computer, which is in theory capable of hypercomputation, but generally thought to be physically impossible.)
 
  • #31


We use digital computers to approximate solutions to systems of differential equations. They do not solve them. They give answers that are very close to a solution. Way back in the day, scientists used analog computers that at least conceptually could yield exact solutions to systems of differential equations.

You are assuming that the universe is computable and then using that assumption to prove that the universe is computable. That of course is not a valid line of reasoning.
 
  • #32


D H said:
No. Allowing the continuum means that the universe is not computable. Computable here has a specific meaning and is related to the answer to the question "Is the universe a giant computer (a giant digital computer, to be specific)?"

Just because we humans cannot calculate to infinite precision doesn't mean that there cannot be a relationship between infinitesimally distant spacetime points. Perhaps all of physics is derived from nothing more than a conjunction of an infinite set of points, each infinitesimally distant from its neighbor, with no other relationship to calculate implied. To suggest that there are any gaps at all between adjacent points creates the dilemma of asserting that something that is not real (the gaps) actually exists.
 
  • #33


friend said:
Just because we humans cannot calculate to infinite precision doesn't mean that there cannot be a relationship between infinitesimally distant spacetime points.

I agree, BUT, as I see it, the question is how any observer/system can make the inference and establish this. My conclusion is that such an inference must involve an infinity of processing and representation, making it unphysical. This is what I mean by non-inferrable.

Something being non-inferrable doesn't mean it doesn't exist.

friend said:
creates the dilemma of asserting that something that is not real (the gaps) actually exists.

Yes, but as I see it at least, it's irrelevant whether it exists, if the action of the computer only depends on its current state of knowledge/information.

Thus, the observer's action should be invariant with respect to whether whatever is "in between the points" exists or not :) So there is no dilemma as I see it.

/Fredrik
 
  • #34


Just like (a bit simplified)

~ an electrically neutral particle simply doesn't respond to an electric field; that particle is quite indifferent to whether there IS an electric field or not.

~ all particles, though, respond to gravity (which is a hint); no particle can be indifferent to gravity

~ a poker player's instant actions are indifferent to what cards the other players REALLY have, because all that matters is what cards he THINKS they have

There are several analogies like this... I think this extends also to the digital/analog question we discuss here.

I think the key point, which we must not forget, is that the question is not, in some unscientific sense, whether there "IS" a continuum or not. The real question, as I see it, is HOW a physical system ACTS, i.e. the action. This is what makes a difference.

If a system acts as if it has discrete information, THEN that is what's physical. To embed this into some continuum model simply doesn't add anything useful. It rather blurs the line between continuum ghosts and physically distinguishable states, as judged by how the system acts.

I'd say that MY actions are indeed coloured by MY own ignorance and incompleteness. So as far as my rational action is concerned, it's completely irrelevant to ponder whether there is something I do not know about that would (if I knew it) affect my decisions :)

/Fredrik
 
  • #35


D H said:
We use digital computers to approximate solutions to systems of differential equations. They do not solve them.
If an analytic solution is possible, a (digital) computer is as capable as a scientist of solving it. And if not, then it might just be that no exact solution exists -- not for you, not for the computer, not for nature. Indeed, such a solution corresponds to hypercomputation (at least in some cases). Take the gravitational many body problem: if you had an exact solution, you could solve the halting problem. Why? Because you can build a computer out of such a system (IIRC, a three body system suffices), and with your analytic solution, could find out for any given evolution of the system whether it ever reaches some halting state.

They give answers that are very close to a solution. Way back in the day, scientists used analog computers that at least conceptually could yield exact solutions to systems of differential equations.
Conceptually, yes, if you neglect finite measurement precision. In practice, analog computers work maybe to three or four digits precision.

You are assuming that the universe is computable and then using that assumption to prove that the universe is computable. That of course is not a valid line of reasoning.
I just observe that every process in the universe (that we've come across so far, at least) can be arbitrarily well approximated using computational means -- i.e. that no matter how big my magnifying glass, I could not observe any deviation that I could not in principle compute. That reality then is computable is just the most conservative hypothesis to go with. The real numbers might have seemed a natural choice when they were first introduced into physics; however, from today's perspective, with the benefit of a theory of computation, they seem like almost alchemistic constructs.

And I'm still not sure I get in what way you claim that Newtonian mechanics, etc., are non-computable theories. One can completely recast these theories in terms of Turing machines, or partial recursive functions from initial to final values. That they are formally defined over the reals plays no role, as everything we do with them happens entirely within a computable subset thereof; and neither does the fact that they involve differentiation, which is in the end just a limit, which exists precisely if it is computable. In the end, anything that can be reduced to manipulating symbols on a sheet of paper according to a certain set of rules is computable, and I don't think there's any theory that can't be thus reduced.
 

1. What is the FQXI essay contest about?

The FQXI essay contest is an annual competition organized by the Foundational Questions Institute (FQXI) that invites scientists, philosophers, and other thinkers to submit essays on a specific topic related to the foundations of physics and cosmology. This year's contest is focused on the question of whether reality is digital or analog.

2. Who can participate in the FQXI essay contest?

The FQXI essay contest is open to anyone who is interested in the topic and has a passion for exploring fundamental questions about the nature of reality. This includes scientists, philosophers, students, and other thinkers from all around the world.

3. How do I submit an essay for the FQXI contest?

To submit an essay for the FQXI contest, you must first register on the FQXI website and create a profile. Once you have registered, you can submit your essay through the online submission form. The essay must be written in English and should be between 1,500 and 10,000 words in length.

4. What is the deadline for submitting an essay for the FQXI contest?

The deadline for submitting an essay for the FQXI contest is June 6, 2021 at 11:59 PM Eastern Daylight Time (EDT). Late submissions will not be accepted, so it is important to submit your essay before the deadline.

5. What are the criteria for judging the FQXI essays?

The FQXI essays will be judged based on their originality, clarity, rigor, and relevance to the topic. The judges will also consider the overall quality of the writing and the depth of the author's understanding of the subject. It is important to present a well-reasoned argument and support it with evidence and references to relevant research.
