A Quantization isn't fundamental

  • #51
The discussions here were interesting, as they made me realize even more how differently we all think about these foundational issues.
DarMM said:
Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
In the extended meaning I used before, even the Standard Model as it stands encodes a realism of symmetries, and these symmetries are used as deductive constraints when we construct theories. This is the powerful method the theoretical framework of QFT rests on. But my perspective is that this power is deceitful, as the choice of constraints can itself be seen as a fine-tuning in theory space. So we not only have the fine-tuning of initial conditions, we also have the fine-tuning of laws. This is a big problem as I see it, and a dynamical fine-tuning could then not follow a timeless law, as that is the meta-law dilemma Smolin talks about.

Instead some kind of evolution that does not obey dynamical LAW seems needed. This way of phrasing it naturally unifies initial states and the state of the law: as I see it, neither of them should be identified with ontic states. So I think these realist ontologies already lead us into trouble, even if we do not involve HV realist models. Even those who reject Bohmian mechanics but embrace the theoretical paradigm of the Standard Model are, IMO, still in trouble.

As has been mentioned already, these fine-tunings are already solved by nature, if physicists would only learn from biology. The state space in biology is not timelessly fixed; it is evolving, but not according to physical law. The first critique one can raise against this is: so what, how do we become more predictive from this insight? That is the question I ask, and the reason Smolin mentions his CNS is just to give an example. One prediction from the insight is that we can use evolutionary traits such as survival, reproduction and self-organisation as soft sub-constraints to replace the HARD deductive constraints of timeless symmetries, and try to reconstruct measurement theory accordingly. Here the deductive machinery of an observer is necessarily an evolved inference system, which is abductive rather than deductive. But compressed sensing also means that even the inference system itself is truncated, and when you truncate a soft inference it looks more like an exact deductive system, because you have discarded the insignificant doubts from reflection.

The discussions on here made me realize exactly how much headache entanglement and the nature of non-commutative observables cause. If we cannot find a conventional "realist model", we need to find another plausible common-sense way of understanding this. And I think that is possible.

/Fredrik
 
  • #52
DarMM said:
Genuinely, I really don't get this line of discussion at all. I am not saying initial condition fine-tuning is an older concept (I know when Newton and Boltzmann lived) or that in Quantum Foundations they exclusively use fine-tuning to mean initial condition fine-tuning.

I am saying that fine-tuning has long been used to mean both in theoretical physics, and Quantum Foundations, like many other areas, uses fine-tuning to mean both.

In that paper I linked they explicitly mean both, as "causal parameters" includes both initial conditions and other parameters, if you look at how they define it.

I really don't understand this at all, I'm simply using a phrase the way it has been used for over a century and a half in theoretical physics. What does it matter if using it on a subset of its current referents extends back further?
All I am saying is that having one phrase which can mean two distinct things is unnecessarily confusing, hence my calling it unfortunate. Based on a careful reading of that paper, it seems the newer secondary usage in the literature might even be correlated with, and therefore reducible to, the older primary usage.

This is of course assuming that a) the usage in this paper is a secondary usage and b) it is typical and therefore representative of the secondary usage in the literature. If readers conflate the effects (e.g. superluminality) with the causes (parameter fine-tuning), this would naturally lead to a correlation between these terms and an evolution of this secondary usage among scientists in this subfield.

The same thing I just described tends to occur in many other academic fields and subfields. I suspect the same may be happening here, but of course I could just be mistaken.
DarMM said:
That's explicitly not true, coming up with Psi-Ontic models that model the non-entanglement part of QM is simple, even simpler than modelling it with Psi-Epistemic models. In fact Psi-Ontic models end up naturally replicating all of QM, you don't even have the blockade with modelling entanglement that you have with Psi-Epistemic models.

That's not what the theorem demonstrates, it holds for both Psi-Ontic and Psi-Epistemic models. The class of models covered includes both.
It is either you or I who is thoroughly confused, or worse, perhaps it is both of us. That is nothing to be ashamed of: these matters are immensely complicated and have stumped the best minds in science for over a century, Einstein included, and in no way would I rank myself close to such individuals. Complicated mathematics, such as QFT or GR calculations, is trivially simple in comparison with what we are discussing here.
DarMM said:
Bohmian Mechanics needs to be fine tuned (Quantum Equilibrium hypothesis), it is known that out of equilibrium Bohmian Mechanics has superluminal signalling. In the Wood-Spekkens paper they are trying to see if that kind of fine-tuning is unique to Bohmian Mechanics or a general feature of all such theories.
It turns out to be a general feature of all Realist models. The only type they don't cover is Many-Worlds. However the Pusey-Leifer theorem then shows that Many-Worlds has fine-tuning.

Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
As I said before, there is absolutely nothing wrong with having or requiring parameter fine-tuning in itself. This is reflected in the practice of bifurcation analysis of dynamical systems, wherein parameter fine-tuning is the essential strategy for identifying the values of some parameter which lead to bifurcations in parameter space, i.e. to second-order phase transitions and related critical phenomena. Usually in physics this is done through some kind of stability criterion exactly analogous to Valentini's Quantum Equilibrium Hypothesis; Manasson does it through an extremum principle in his paper.
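As a toy illustration of what I mean (my own minimal sketch, nothing to do with Manasson's actual model; the helper function and the numbers are made up for the example), here is a parameter scan of the logistic map, where "fine-tuning" the control parameter r is precisely how one locates the period-doubling bifurcations:

```python
# Minimal sketch: scan the control parameter r of the logistic map x -> r*x*(1-x)
# and count the period of the long-run attractor. The period only jumps (1 -> 2 -> 4 -> 8)
# when r is tuned past a critical value, i.e. a bifurcation point.
def attractor_period(r, n_transient=2000, n_sample=200):
    x = 0.5
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    seen = set()
    for _ in range(n_sample):         # count distinct values on the attractor
        x = r * x * (1 - x)
        seen.add(round(x, 5))
    return len(seen)

for r in (2.8, 3.2, 3.5, 3.56):       # values chosen between known bifurcation points
    print(f"r = {r}: period {attractor_period(r)}")
# prints periods 1, 2, 4, 8 respectively
```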

W.r.t. these 'physically illegal' ideas (including many worlds): the possibility that novel schemes/models will end up displaying such features can actually be removed a priori, by explicitly choosing particular mathematics which cannot model the phenomenon in question and then constructing the scheme/model on that mathematics. A theorist who realizes this can obviously take advantage of it when constructing or searching for a new model.

The same thing can also be done in reverse: if one wants to construct a scheme or model which intrinsically has some particular conceptual property, e.g. non-computability, one can choose to build it on non-computable mathematics, such as the aperiodic tiling of the plane (Penrose tiling), which is tied to the undecidability of the general tiling problem. Any resulting scheme/model based on such mathematics will then be, if successfully constructed, inherently non-computable, i.e. fine-tuned with non-computability as a resulting effect.

It is important to understand that concepts such as non-computability, superdeterminism, superluminality and retrocausality aren't themselves logically incoherent. They are instead 'illegal' w.r.t. our current conceptual understanding of physical theory based on experimental phenomenology; there is however absolutely no guarantee that our current conceptual understanding of fundamental physical theories will not be modified or replaced by some superior theories in the future, meaning it could turn out either way.

It goes without saying that this is exactly what physicists working in the foundations are trying to figure out. The related issue of whether 'physically illegal' ideas (such as superdeterminism, retrocausality and superluminality) could result from some kind of parameter fine-tuning is therefore frankly speaking completely irrelevant. Just because identifying fine-tuning is a useful strategy in order to exclude ideas in the practice of high energy theoretical particle physics doesn't mean it is useful outside of that context; as Hossenfelder argued, it isn't.
DarMM said:
I don't need a psychoanalysis or rating of what I do or do not understand. Tell me what I have misunderstood in the Pusey-Leifer or Wood-Spekkens papers. I've gone through the proofs and then rederived them myself to ensure I understood them, as well as seen the conclusion "All realist theories are fine-tuned" explicitly acknowledged in talks by Quantum Foundations experts like Matt Leifer.

See point nine of this slide:
http://mattleifer.info/wordpress/wp-content/uploads/2009/04/FQXi20160818.pdf

It's very easy to start talking about me and my comprehension, have you read the papers in depth yourself?
As I have said at the end of my other post I mean you no offense whatsoever. I'm just trying to analyze what may be the cause of these disagreements which are holding us back from coming to a resolution. If I'm actually wrong, I'd be extremely happy if you or anyone else could show me why using good arguments; optimistically it may even lead to a resolution of these devilish misunderstandings which have plagued this field for almost a century now, but I digress.

Yes, I have read the papers in depth (which is why I tend to take so long to respond). It is not that there is a mistake in the argument, or that you have made a mistake in reproducing it; I am instead saying that to generalize (by induction) the conclusion from the particular case wherein the proof is given, based on these particular assumptions and premises, to the general case is not itself a logically valid step. This is why these no-go theorems aren't actually intratheoretical theorems of QM or even of physical theory, but merely atheoretical logical theorems about QM.

To actually make a theorem which speaks about the general case, which is what you and others seem to be trying to do, would require many more premises and assumptions, i.e. all the conceptual properties necessary for the mathematical construction of a theory of which QM would be a limiting case, given that such a thing exists; if you could construct such a theorem, it would essentially be an undiscovered law of physics.

Essentially, this is exactly what I am trying to do: reason backwards from conceptual properties which have survived no-go theorems and then use nonstandard mathematics to construct a novel theory based on those remaining concepts. There is no guarantee such a strategy will work, but generally speaking it is a highly promising reasoning strategy, often used to identify the correct mathematical description (usually in the form of differential equations) when dealing with black-box systems.
 
  • #53
Auto-Didact said:
All I am saying is that having one phrase which can mean two distinct things is unnecessarily confusing, hence me calling it unfortunate. Based on a careful reading of that paper, it seems the newer secondary usage in the literature might even seems to be correlated with and therefore reducible to the older primary usage.
As an illustration that it covers initial-condition fine-tuning: the quantum equilibrium hypothesis in Bohmian Mechanics is a condition on the initial conditions, and this is included in the type of fine-tuning discussed in the paper.

Auto-Didact said:
It is either you or I who is thoroughly confused - or worse, perhaps it is even both of us.
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
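For anyone following along, the framework can be summarised schematically (this is the standard Harrigan-Spekkens setup, stated loosely and without the measure-theoretic details): a preparation of the quantum state ##\psi## induces a probability distribution ##\mu_\psi(\lambda)## over the ontic state space ##\Lambda##, a measurement ##M## is described by response functions ##\xi(k\mid\lambda,M)##, and the model reproduces quantum theory if
$$P(k\mid\psi,M)=\int_{\Lambda}\xi(k\mid\lambda,M)\,\mu_\psi(\lambda)\,\mathrm{d}\lambda=\langle\psi|E_k|\psi\rangle$$
for every preparation and measurement, with ##E_k## the corresponding POVM element. ##\psi##-ontic then means the ontic state ##\lambda## fixes ##\psi## uniquely (equivalently, ##\Lambda## factors as above), while ##\psi##-epistemic means the distributions for distinct ##\psi## may overlap.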

Auto-Didact said:
Valentini's Quantum Equilibrium Hypothesis
This doesn't affect your argument, but just so you know: it isn't Valentini's hypothesis, it goes back to Bohm, and without it Bohmian Mechanics doesn't replicate QM.

Auto-Didact said:
It is important to understand that concepts such as non-computability, superdeterminism, superluminality and retrocausality aren't themselves logically incoherent. They are instead 'illegal' w.r.t. our current conceptual understanding of physical theory based on experimental phenomenology; there is however absolutely no guarantee that our current conceptual understanding of fundamental physical theories will not be modified or replaced by some superior theories in the future, meaning it could turn out either way.
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.

Auto-Didact said:
It goes without saying that this is exactly what physicists working in the foundations are trying to figure out. The related issue of whether 'physically illegal' ideas (such as superdeterminism, retrocausality and superluminality) could result from some kind of parameter fine-tuning is therefore frankly speaking completely irrelevant. Just because identifying fine-tuning is a useful strategy in order to exclude ideas in the practice of high energy theoretical particle physics doesn't mean it is useful outside of that context; as Hossenfelder argued, it isn't.
Well, it's not so much that they result from fine-tuning as that one can prove they require fine-tuning. Also, this isn't high-energy particle physics; Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.

Auto-Didact said:
As I have said at the end of my other post I mean you no offense whatsoever
My apologies, you clearly are conducting this in good faith, my fault there. :smile:

Auto-Didact said:
the conclusion of the argument from the particular case wherein the proof is given - based on these particular assumptions and premises - to the general case isn't itself a logically valid step
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?

Auto-Didact said:
To actually make a theorem which speaks about the general case - what you and others seem to be trying to do - would require much more premises and assumptions
If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework. However if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the onotolgical models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
 
  • #54
DarMM said:
If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework.
From my inference perspective, the ontological framework, with an ontic sample space for "the black box", seems like a very strange ansatz to start with, and I do not see it as a general ontological model of psi, except if one secretly just tries to get behind the observational filters nature gives us in order to restore realism.

After all, I assume the "ontic model of psi" does not really mean the same thing as a realist model; the former phrase suggests a general understanding of what psi is.

My first objection is that it does not even seem to reflect on the relational nature of things between observer and system. Who says that the ontological model of psi is about an ontic space associated with the system (i.e. the black box)? It might as well reflect the observer's ontological state of information about the black box, irrespective of what is "right or wrong". Often this is ignored or labelled psi-epistemic, because it is easy to jump to the conclusion that this somehow involves a human observer. It could also refer to the observer's actual physical state (following from its interaction history, and then it does not matter whether you label it measurement or preparation; it falls into the same category). This then coincides with an observer-Bayesian interpretation of probabilities. We need no ensembles then, just the retained information and how it has been processed. However, the notion of observer needs to be generalized beyond classical measurement devices to make this idea viable. For example, two observers can even be entangled with each other, truly making the information "hidden". There are ways to hide information without sticking to the old realist models, I think. Information inside a black hole is also possibly hidden, yet it can be entangled with things outside the black hole. Susskind has talked a lot about this under the headlines "ER = EPR" or even "GR = QM", where he argues that entanglement and the makeup of spacetime are related.

Herein lies the distinction between the psi-epistemic view within what I think you refer to as the standard ontological models framework, and what I think of as the interpretation that the only justified "ontic states" are exactly the observer's physical state, which encodes expectations about its own environment. This "ontological model" does not, as far as I can see, fall into the standard ontological models framework, because the whole ansatz of an ontic sample space is alien to its constructing principles.

As I see it, a sound framework should make use only of things that are organised and recoded from possible observations, and I want to know how the ontic sample space got there in the first place, if not from the secret dreams of old times. It seems to me the ontic space is not needed; it only adds confusion, doesn't it?

So what I think is that the viable ontic model for psi we need (not for removing MWI nightmares, but in order to make progress in unification and quantum gravity) may not be in that standard framework. In this sense, I agree that the scope of the no-go theorems is limited. That's not to say they aren't important, of course! They are a tool for dismissing candidate theories.

/Fredrik
 
  • #55
Well the quantum state being a result of the system's state and the observer/measuring device's state is actually handled in the ontological models framework, via what are called the response functions.

Plus if you actually think about it, the observer's state being a factor in determining the result doesn't help much in cases such as entanglement.
 
  • #56
DarMM said:
Well the quantum state being a result of the system's state and the observer/measuring device's state is actually handled in the ontological models framework, via what are called the response functions.

How are the response functions, and the structure of the ontic space, supposed to be inferred (abduced) by the observer? It seems to me they aren't inferable. So which observer is supposed to be doing this inference?

As we know, assuming a random microstructure and then, say, applying an ergodic hypothesis is not innocent: by the choice of conditional partitioning you can bias your probabilistic conclusions. This is why I object to introducing probability spaces, such as a sample space, that are not constructible from the observer's perspective.
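A classic toy example of how the choice of "uniform" sample space biases the answer (my illustration here, nothing to do with the papers under discussion) is Bertrand's paradox: three equally natural sampling conventions for a "random chord" of a circle give three different probabilities for the same event.

```python
# Bertrand's paradox: P(random chord of the unit circle is longer than the side
# of the inscribed equilateral triangle) under three "uniform" conventions.
import random, math

N = 200_000
threshold = math.sqrt(3)        # side of the inscribed equilateral triangle

def chord_from_endpoints():     # two uniformly random endpoints on the circle
    a, b = random.uniform(0, 2*math.pi), random.uniform(0, 2*math.pi)
    return 2 * abs(math.sin((a - b) / 2))

def chord_from_radius():        # uniformly random distance from the centre
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d*d)

def chord_from_midpoint():      # midpoint uniform in the disk (rejection sampling)
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x*x + y*y <= 1:
            return 2 * math.sqrt(1 - (x*x + y*y))

for name, gen in [("endpoints", chord_from_endpoints),
                  ("radius", chord_from_radius),
                  ("midpoint", chord_from_midpoint)]:
    p = sum(gen() > threshold for _ in range(N)) / N
    print(f"{name:9s}: P(long chord) ~ {p:.3f}")
# ~0.333, ~0.500, ~0.250: same question, three answers, depending on which
# sample space was declared "uniform".
```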

I think QM needs an inference ontology, not a hidden-space ontology.
DarMM said:
Plus if you actually think about it, the observer's state being a factor in determining the result doesn't help much in cases such as entanglement.

That depends on what you make of it, I think.

You are right that we get nowhere if we just stop and say that it is all just expectations of the observer. Nothing interesting happens until we allow these observers to interact, or communicate. But this communication is a competitive game, which is also about survival. It is like the difference between QM in curved space and QG: only when we try to account for the real backreaction of the environment on the observer's expectations do we get the real thing.

First, I will admit the obvious: I do not have a ripe model yet, so perhaps I should just be silent. But the line of reasoning I have in mind is this:

The observer is interacting with its environment, and in the general case the environment is the black box.
But what is the environment? By conjecture it is abstractly populated by fellow observers.

And the conjecture here is that they are all to be understood as information-processing agents that follow the same rules of inference.

If we see the action of an observer as a guided random walk in its own state space, and the backreaction of the environment as a guided random walk in ITS state space, what we end up with are two coupled, interacting information-processing agents. An evolution will take place that evolves the theory implicitly encoded on both sides, and the random walk gets improved guidance as the theory evolves. If not, the agent will destabilise and give up its complexions to the environment (i.e. dissipate or radiate away).
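Purely to illustrate the kind of coupling I have in mind (a toy I am making up on the spot, not a worked-out model; the update rule and the numbers are arbitrary):

```python
# Toy sketch: two "agents", each a biased random walker whose bias (guidance) is its
# evolving estimate of the other's behaviour, updated from what it actually observes.
import random

class Agent:
    def __init__(self):
        self.position = 0
        self.belief = 0.5                      # estimated P(other steps +1)

    def step(self, observed_other_step):
        # update the internal "theory" of the other agent from the last observation
        self.belief += 0.05 * ((observed_other_step > 0) - self.belief)
        # guided walk: bias the own step toward the expected motion of the environment
        p_up = 0.5 + 0.4 * (self.belief - 0.5)
        move = 1 if random.random() < p_up else -1
        self.position += move
        return move

a, b = Agent(), Agent()
move_a, move_b = 1, -1                         # arbitrary initial "observations"
for _ in range(1000):
    move_a, move_b = a.step(move_b), b.step(move_a)   # simultaneous update

print(a.position, b.position, round(a.belief, 2), round(b.belief, 2))
# Each side's "theory" (belief) co-evolves with the other's actual behaviour, and the
# guidance of each walk is shaped by that evolving theory; the steps remain random.
```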

So in entanglement, I envision that the superposition is not seen as a property of the system (the entangled particles), but as the state of the environment. Note that we aren't just talking about Alice and Bob, but about the whole experimental setup, including slits or polarizers or whatever is in there. I.e. the whole environment encodes, and thus BACKREACTS on, the particle just as if it IS in superposition, and this is not challenged unless the entanglement is broken by a measurement. If we instead assume that the superposition is solely due to Alice's and Bob's ignorance, we get a different result, because that is not how it works: it is not about Alice's and Bob's ignorance, it is about the entire environment's support of the superposition. This is not the same thing as a hidden variable.

One can understand this conceptually by a game-theoretic analogy. As long as ALL other players are convinced about something, it does not matter if it is a lie, because the backreaction of the environment "supports the lie". In the extreme, there is no way for a player to tell a stabilized lie from reality. Ultimately this means that in the inference perspective, boolean states are not objective; true and false are as deceptive as old-time realism.

These ideas are what I am taking seriously, and I think these constraints will guide us to predict WHICH information-processing structures are likely to appear in this game, if we start from zero complexity and build from there. I.e. this reasoning starts at the highest possible energy at the big bang, and we then ask ourselves which mathematical inference systems are most likely to survive if we implement these ideas, and whether we can harvest the various known phenomenology as we reduce the temperature (and thus increase the complexity) of the observers.

/Fredrik
 
  • #57
Fra said:
How are the response functions, and the structure of the ontic space, supposed to be inferred (abduced) by the observer? It seems to me they aren't inferable. So which observer is supposed to be doing this inference?
It doesn't really matter; I mean, it's not as if the form of the ontic space affects Bell's theorem, does it? You have to be more crazy (in the sense of Bohr's "not crazy enough") than this to escape the ontological models framework.

Fra said:
If we see the action of an observer as a guided random walk in its own state space, and the backreaction of the environment as a guided random walk in ITS state space, what we end up with are two coupled, interacting information-processing agents. An evolution will take place that evolves the theory implicitly encoded on both sides, and the random walk gets improved guidance as the theory evolves. If not, the agent will destabilise and give up its complexions to the environment (i.e. dissipate or radiate away).

So in entanglement, I envision that the superposition is not seen as a property of the system (the entangled particles), but as the state of the environment.
All of this makes sense, nothing wrong with it, but it falls within the ontological models framework, so it will have to be nonlocal, retrocausal, superdeterministic or involve Many-Worlds and in addition be fine-tuned. In fact from the rest of your post what you are talking about seems to me to be superdeterminism driven by environmental dynamics.
 
  • #58
DarMM said:
It doesn't really matter, I mean it's not as if the form of the Ontic space affects Bell's theorem does it?

We don't need details, but the main point is that the mere existence of the ontic space, the conditional probability measures that connect the ontic state to the epistemic state and the preparation, and the response functions, contains non-trivial information about the matter. And this is what is used in the theorem.

It is the fact that the ontic space exists, with the mentioned conditional probability measures, that encodes the information used in the theorem. If this information does not exist, the premise of the theorem is lost.

What I suggested is that I do not see a clear justification for the ansatz. The ansatz is obvious if your mindset is tuned to classical thinking, but if you release yourself from that and instead think in terms of inference, I am not sure how you can justify it.
DarMM said:
All of this makes sense, nothing wrong with it, but it falls within the ontological models framework, so it will have to be nonlocal, retrocausal, superdeterministic or involve Many-Worlds and in addition be fine-tuned. In fact from the rest of your post what you are talking about seems to me to be superdeterminism driven by environmental dynamics.

Surely the explanatory burden is all on me to explain my reasoning, sorry!

But I don't see how you get this impression. Btw, the "rules of inference" I refer to are NOT deductions; I actually think of them as evolved random processes. Their non-random nature is self-organised, not left to ad hoc fine-tunings. This should be as far from superdeterminism as you can get. As far as locality goes, what I suggest is explicitly local in information space; non-locality is possible only as evolved correlations. I will try to write more later, or we can drop it here. The main point was not to explain all of this in detail anyway; I just do not see that this idea fits into the ontological models framework, and I would insist that models competing with QM are by no means exhausted by that framework. To prove it explicitly, I suppose nothing less than completing the program will do. So let's just leave my objection for the record ;)

/Fredrik
 
  • #59
Fra said:
We don't need details, but the main point is that the mere existence of the ontic space, the conditional probability measures that connect the ontic state to the epistemic state and the preparation, and the response functions, contains non-trivial information about the matter. And this is what is used in the theorem.
A minor technical point, I would say "this is what is used in the framework", i.e. the Ontic models framework in general and all proofs that take place in it, not just Bell's theorem.

Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed you could see the no-go results as suggesting moving to a non-Realist interpretation/model, it's not meant to also argue against them.

Fra said:
What i suggested is that i do not see a clear justification for the ansatz
I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.

It would be like somebody setting out to see what constraints apply to discrete models and then objecting to their use of ##\mathbb{Z}^{d}##

Fra said:
But I don't see how you get this impression. Btw, the "rules of inference" I refer to are NOT deductions; I actually think of them as evolved random processes.
In that case I didn't understand; I'd need something more concrete in order to say anything sensible, perhaps some mathematics.
 
  • #60
DarMM said:
so it will have to be nonlocal

All indications are that nonlocality is the primary reason for QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
 
  • #61
DarMM said:
As an illustration that it covers initial-condition fine-tuning: the quantum equilibrium hypothesis in Bohmian Mechanics is a condition on the initial conditions, and this is included in the type of fine-tuning discussed in the paper.
Ahh, now I see, you were referring to initial-condition fine-tuning all along! We are in far more agreement than it seemed from the earlier discussion. The controversial nature of initial-condition fine-tuning depends again on the formulation of the theory; the question is, just as with parameter fine-tuning, whether the initial conditions are determined by a dynamical process or are just due to randomness, which raises issues of (un)naturalness again; this is a genuine open question at the moment.

Having said that, the initial conditions in question, i.e. the initial conditions of our universe, are precisely an area where QM is expected to break down and where some deeper theory like quantum gravity seems necessary in order to make more definitive statements. The number of degrees of freedom allowed by standard QM (standard QM being time-symmetric) is far, far larger than what we seem to see in actuality. In particular, from the CMB measurements (a blackbody radiation curve) we can conclude that there was a state of maximum entropy for the matter and radiation, and that it was therefore random; but more important to note is that there seem to have been no active gravitational degrees of freedom!

We can infer this from the entropy content of the CMB. Therefore we can conclude that in our own universe the initial conditions were in fact extremely fine-tuned compared to what standard QM (due to time-symmetry) would have us ascribe to maximum entropy, i.e. to randomness, this huge difference being due to there being no active gravitational degrees of freedom, i.e. a vanishing Weyl curvature. The question then is: what was the cause of there being no active gravitational degrees of freedom during the Big Bang?
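To put rough numbers on this (these are Penrose's well-known order-of-magnitude estimates, quoted from memory): taking the Bekenstein-Hawking entropy
$$S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}$$
as the measure of how much entropy the gravitational degrees of freedom could have carried, the maximum entropy available to the matter of the observable universe is of order ##10^{123}\,k_B##, while the CMB accounts for only about ##10^{88}\,k_B##; the initial state therefore occupied something like one part in ##10^{10^{123}}## of the phase-space volume that was available to it.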
DarMM said:
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not come close to exhausting all possible psi-ontological models, but covers only a small subset of them.
DarMM said:
This doesn't affect your argument, but just to let you know it isn't Valentini's Hypothesis it goes back to Bohm, without it Bohmian Mechanics doesn't replicate QM.
Thanks for the notice!
DarMM said:
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.
Okay, fair enough.
DarMM said:
Well it's not so much that they result from fine-tuning, but proving that they require fine-tuning. Also this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.
I know that this isn't hep-th; I'm just presuming that the anti-fine-tuning stance probably originated there and then spilled over via physicists who began working in hep-th (or were influenced by it during training) and later ended up working in quantum foundations.
DarMM said:
My apologies, you clearly are conducting this in good faith, my fault there. :smile:
:)
DarMM said:
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?
...
If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework. However if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the onotolgical models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
To avoid misunderstanding, restated: all the premises and assumptions which go into proving this theorem (and most such no-go theorems) are not general enough to prove a theorem which is always true in physics regardless of context; an example of such a theorem is the work-energy theorem. "The general case" does not refer to all possible physical theories (since this would also include blatantly false theories), but rather to all physical theories that can be consistent with experiment.

But as I have said above, Spekkens' definition of psi-ontology is an incorrect technical simplification. I can see where his definition is coming from, but it seems to me to be a clear case of operationalizing a difficult concept into a technical definition which captures only a small subset of the concept's instantiations, and then prematurely concluding that it captures the concept fully. All of this is done just in order to make concrete statements; the problem, i.e. a premature operationalization, arises when the operationalization is assumed to be comprehensive and therefore definitive, instead of tentative, i.e. a hypothesis.

These kinds of premature operationalizations of difficult concepts are rife in all of the sciences; recall the conceptual viewpoint of what was held to be necessarily and absolutely true in geometry prior to Gauss and Lobachevski. Von Neumann's proof against hidden-variable theories is another example of a premature operationalization, which turned out to be false in practice as shown by Bell. Another example is the claim by Colbeck and Renner, which is empirically blatantly false, because there actually are theories which are extensions of QM with different predictions, e.g. with standard QM as a limiting case in the limit ##m \ll m_{\mathrm{Planck}}##; such theories can be vindicated by experiment and the issue is therefore an open question.

I do understand why physicists would (prematurely) operationalize a concept into a technical definition, hell, I do it myself all the time; this is, after all, how progress in science is made. However, here it seems that physics has much to learn from the other sciences, namely that such operationalizations are almost always insufficient or inadequate for characterizing a phenomenon or concept in full generality; this is why most sciences couch such statements in doubt and say (almost like clockwork) that more research is needed to settle the matter.

With physics, however, we often see instead an offering of a kind of (false) certainty. For example, we saw this with Newton w.r.t. absolute space and time, we saw it with von Neumann w.r.t. hidden variables, and we see it with Colbeck and Renner above. I suspect that this is due to the nature of operationalizations in physics, i.e. their use of (advanced) mathematics. Here again physicists could learn from philosophy, namely that mathematics, exactly like logic (which philosophers of course absolutely adore), can be a blatant source of deception precisely because of its broad applicability and assumed trustworthiness; this occurs through idealization, simplification and, worst of all, by hiding subjectivities within the very axioms behind the mathematics. All of this needs to be controlled for as a source of cognitive bias on the part of the theorist.

I should also state that these concerns do not apply generally to the regular mathematics of physics (analysis, differential equations, geometry and so on), because the normal practice of physics, i.e. making predictions and doing experiments, does not involve formal mathematical arguments built on proof and axiomatic reasoning; almost all physicists working in the field can attest to this. This is why most physicists and applied mathematicians tend to be relatively bad at axiomatic reasoning, while formal mathematicians, logicians and philosophers excel at that type of reasoning while being relatively bad at regular 'physical' reasoning.
 
  • #62
Auto-Didact said:
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not come close to exhausting all possible psi-ontological models, but covers only a small subset of them.
To structure the response to your post in general, could I just know what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.

Currently I'm trying to understand the rest of your post; you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?

Auto-Didact said:
I'm just presuming that the anti-fine-tuning stance probably originated there and then spilled over via physicists who began working in hep-th (or were influenced by it during training) and later ended up working in quantum foundations.
Unlikely, there aren't many such people. Plus it isn't an anti-fine-tuning stance, it's just saying the fine-tuning is present. Many simply accept the fine-tuning.
 
  • #63
DarMM said:
A minor technical point, I would say "this is what is used in the framework", i.e. the Ontic models framework in general and all proofs that take place in it, not just Bell's theorem.
...
Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed you could see the no-go results as suggesting moving to a non-Realist interpretation/model, it's not meant to also argue against them.
...
I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.

I see; then our disagreements here are mainly a matter of the definition of "ontology for QM". My reaction was against the impression I got somewhere earlier in the thread that Bell's theorem was supposed to be a sweeping argument against the explanatory value of understanding particles as self-organised systems in a chaotic setting. I think that is wrong and misguided, and risks dumbing down ideas which may turn out to be interesting. I was assuming we talked about ontological understanding of QM in general, not the narrowed down version of realist models. IMO ontology is not quite the same as classical realism?

/Fredrik
 
  • #64
DarMM said:
To structure the response to your post in general, could I just know what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence. This is directly opposed to psi-epistemic which simply means treating the wavefunction as an epistemological object, i.e. as a matter of knowledge.

Spekkens may have popularized the usage of these terms in foundations based on his specific operationalization, but he certainly did not invent these terms (perhaps only the shorthand 'psi-ontic/epistemic' as opposed to 'psi is ontological/epistemological').

These terms have been used in the foundations literature since Bohr, Einstein, Heisenberg et al., and they have of course been standard terminology in philosophy (metaphysics) for millennia.
DarMM said:
Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?
Yes, basically. I apologize for my somewhat digressive form of writing; I'm speaking not just to you, but to everyone who may be reading (including future readers!).
 
  • #65
Auto-Didact said:
Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not come close to exhausting all possible psi-ontological models, but covers only a small subset of them.
I wouldn't want to be so harsh as to claim Spekkens "misunderstood" anything, but I get your point, and incidentally the simplification is also power: after all, it is hard to compute with concepts until there is a mathematical model for them.

This also reminds me of one of Smolin's notes on Wigner's query about the unreasonable effectiveness of mathematics.

"The view I will propose answers Wigner’s query about the ”unreasonable effectiveness of mathematics in physics” by showing that the role of mathematics within physics is reasonable, because it is limited."
-- L.Smolin, https://arxiv.org/pdf/1506.03733.pdf

This is in fact related to how I see deductive logic as emergent from general inference, such as induction and abduction, by compressed sensing. To be precise, you sometimes need to take the risk of being wrong, and not account for all the various subtle concerns that are under the FAPP radar.

/Fredrik
 
  • #66
Auto-Didact said:
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence.
And what aspect of this does the ontological framework miss out on/misunderstand?
 
  • #67
Fra said:
I was assuming we talked about ontological understanding of QM in general, not the narrowed down version of realist models
The no-go theorems refer to the latter. Self-organising chaotic models not relating to an underlying ontic space would not be covered.

Fra said:
IMO ontology is not quite the same as classical realism?
It's certainly not, but it is important to show that classical realism is heavily constrained by QM, as many will reach for it; hence the ontological models framework.
 
  • #68
DarMM said:
And what aspect of this does the ontological framework miss out on/misunderstand?
That ontology is fully equivalent, and therefore reducible, to a state space treatment (or any other simplified/highly idealized mathematical treatment, for that matter), whether for the ontology of the wavefunction of QM or for the ontology of some (theoretical) object in general.

To say that having an ontology of psi is equivalent to a state space treatment is to say that no mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.

This is a hypothesis which is easily falsified, namely by constructing another mathematical formulation based on a completely different conceptual basis which can also capture the ontology of psi.

Perhaps this would end up being completely equivalent to the state space formulation, but that would have to be demonstrated. Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.

To give another example by analogy: Newtonian mechanics clearly isn't the only possible formulation of mechanics, despite what hundreds or thousands of physicists and philosophers working in the foundations of physics argued for centuries, and regardless of the fact that reformulations such as the Hamiltonian and Lagrangian ones are fully equivalent to it while sounding conceptually completely different.
 
  • #69
Auto-Didact said:
To say that having an ontology of psi is equivalent to a state space treatment, is to say that no other possible mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.
##\psi## is a solution to the Schrödinger equation and it has a state space, Hilbert space; what would it mean for a theory in which ##\psi## is a real object not to have a state space formulation?

Auto-Didact said:
Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.
This might help: can you give an example?
 
  • #70
DarMM said:
##\psi## is a solution to the Schrödinger equation and it has a state space, Hilbert space; what would it mean for a theory in which ##\psi## is a real object not to have a state space formulation?
Of course, I am not saying that it doesn't have a state space formulation, but rather that such a formulation need not capture all the intricacies of a possible more complete version of QM, or theory beyond QM, wherein ##\psi## is taken to be ontological. To avoid misunderstanding: by a 'state space formulation of the ontology of ##\psi##' I am referring very particularly to this:
DarMM said:
Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
DarMM said:
This might help, can you give an example?
Some (if not all) wavefunction collapse schemes, whether or not they are supplemented with a dynamical model characterizing the collapse mechanism. The proper combination of such a scheme and a model can produce a theory beyond QM wherein ##\psi## is ontological.
 
  • #72
Auto-Didact said:
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
A measurable set; to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one factor the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Things like GRW are covered within the general ontological models framework, unless you make other assumptions that exclude them (which some theorems do, but not all).

A ##\psi##-ontic model would have to break out of the framework to escape many of the results, by breaking some of the axioms; these are the so-called "exotic" ontic models. However, even these (e.g. Many-Worlds) still have ##\Lambda = \mathcal{H} \times \mathcal{A}##. The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
 
  • #73
As we keep thinking differently, and this is a repeating theme in various disguises, I think it is worth noting that IMO there is a difference between uncertainty in the general case and what we call ignorance.

Both can be treated within a probability framework, but their origins and their logical properties, once we start to talk about conjunctions etc., are very different.

Uncertainty originating from non-commutative information.
- This is the typical HUP uncertainty relation between conjugate variables. This uncertainty is not to be interpreted as "ignorance"; it is rather a structural constraint, and there is no "cure" for it by adding "missing information".
- One can, OTOH, ask WHY nature seems to "complicate matters" by encoding conjunctions of non-commutative information. My own explanatory model is that it is simply an evolutionarily selected compressed sensing, i.e. this "more complicated logic" is more efficient. [That remains to be proven in context, though.]
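For concreteness, the structural constraint I mean is just the general Robertson uncertainty relation, which holds for any state and any pair of observables, independently of anyone's ignorance:
$$\Delta A \,\Delta B \;\geq\; \tfrac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|.$$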

In a way, I reject the "ontic space model" on generic grounds, simply because for the above reason it is doomed to fail. I would even say it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems; that is, give or take nonlocality or superdeterminism etc., I find it pathological to start with.

Uncertainty originating from incomplete information.
Even though this is closer to "classical statistical" uncertainty, there is another twist here that makes things very interesting. It is one thing to think about "ignorance" in the sense of "could have known, but don't", because I was not informed, or I "lost the information", etc.

But there can also be (this is a conjecture of mine) physical constraints in the observer's structure that fundamentally limit the AMOUNT of information it can encode. This is actually another "non-classical" uncertainty, in the sense that when we consider models where the action DEPENDS on summing over probabilities, this actually changes the game! The "path integral", or whatever version of it we use, gets a self-imposed regularization associated with the observer's, say, mass or information capacity (the details here are an open question). This latter "uncertainty" is also the reason for the significance of compressed sensing.

So I would say there are at least THREE types of uncertainty here, and ALL three are IMO at play in a general model.

This kind of model is what I am personally working on, and it is obviously fundamental, as it not only reconstructs spacetime, it reconstructs the mathematical inference logic for physics. It aims to explain the emergence of quantum logic, and also to understand how gravity is incorporated. But it does NOT aim to do so in terms of a simple ontic model that uses only one of the THREE types of uncertainty. This is why I keep calling it general inference: it is a generalisation that goes beyond both Kolmogorov probability and quantum logic.

/Fredrik
 
  • #74
DarMM said:
A measurable set; to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one factor the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
That doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics (definitely all I have ever seen in physics and in applied mathematics as used in other sciences) are most naturally formulated as symplectic manifolds, fibre bundles or other analogously highly structured mathematical objects.

Being, as you say, essentially free of a natural conceptual formulation as some mathematical space would make this a very atypical foundational object outside of pure mathematics proper, i.e. an artificially constructed (a posteriori) mathematicized object based entirely in axiomatics. This means the mathematical identification or construction of the object was purely a matter of being defined into existence by axiomatic reasoning rather than being naturally discovered, and it therefore lies almost certainly outside physics proper.

Such 'artificial mathematical objects' are rife outside the exact sciences, e.g. defined operationalizations which bear only a strained resemblance to the real-world phenomena they are meant to reflect. Usually such objects are based on an extrapolation of some (statistical) data, i.e. a (premature) operationalization of a concept into a technical mathematical definition.

It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
DarMM said:
The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
 
  • #75
Auto-Didact said:
That doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics (definitely all I have ever seen in physics and in applied mathematics as used in other sciences) are most naturally formulated as symplectic manifolds, fibre bundles or other analogously highly structured mathematical objects.
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.

Auto-Didact said:
It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
I don't follow this, to be honest. Requiring it only to be a measurable set is simply to make it very general; it includes symplectic manifolds, fiber bundles, etc. as special cases. I don't really see how that makes it secretly epistemic; otherwise anything supporting integration would be secretly epistemic.

Auto-Didact said:
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
There's no a priori reason to exclude them and I think this is where the point is being missed.

I think both you and @Fra are taking the ontological foundations framework as some kind of claim of a "final argument" or something. For example thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that, it's simply a very general framework so one can analyse a broad class of theories, nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
DarMM said:
It would be like somebody setting out to see what constraints apply to discrete models of some theory and then objecting to their use of ##\mathbb{Z}^{d}##

And to come back to a point you made earlier:
Auto-Didact said:
In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM: on the face of it, it goes against accepted wisdom in contemporary physics, but this in no way invalidates it; it is a logically valid argument), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations, and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) to the effects of fine-tuning, based on arguments of the form in c).
(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.

Auto-Didact said:
This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using/learning mathematics.
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
 
Last edited:
  • #76
Fra said:
In a way, I reject the "ontic space model" on generic grounds, simply because for the above reason it is doomed to fail. I would even think it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems. That is, give or take nonlocality or superdeterminism etc., I find it pathological to start with.
Well, the ontological models framework shows that certain such models have pathologies in a rigorous way; it's one thing to say something is "expected" according to your personal intuition, quite another to actually prove it.
 
  • Like
Likes Fra
  • #77
ftr said:
All indications are that nonlocality is the first reason for QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
I don't understand the link with virtual particles.
 
  • #78
DarMM said:
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - way too much uncertainty - for one to make definitive axiomatic statements (theorems) without deluding oneself.
DarMM said:
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.
The problem with reifying some piece of mathematics as an actual physical concept - especially an overt generalization such as the measurable set, a construction which, just like ZFC and other standard foundations of mathematics, was purely constructed a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and so get lost during the abstraction process; I am arguing that the ontology vanishes as well, leaving one with nothing but epistemology.
DarMM said:
There's no a priori reason to exclude them and I think this is where the point is being missed.
I understand that and I am glad that you realize that, however I'm not so sure other physicists who read and cite foundations literature do realize this as well. In my experience they tend to take statements - especially theorems - at face value as either non-empirical evidence or definitive mathematical proof; this goes for physicists at all levels, from (under)grad students up to professors.
DarMM said:
I think both you and @Fra are taking the ontological foundations framework as some kind of claim of a "final argument" or something. For example thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that, it's simply a very general framework so one can analyse a broad class of theories, nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
The goal of foundations is to provide exactly such definitive statements; the problem is that axiomatic statements such as the no-go theorems, and indeed axiomatic reasoning itself, have historically never belonged to the toolbox of the foundations of physics, but instead to the toolbox of mathematical physics. It is paramount to understand that axiomatics, being essentially a form of deductive logic, cannot go beyond what is defined. As Poincaré said:
Poincaré said:
We have confined ourselves to bringing together one or other of two purely conventional definitions, and we have verified their identity; nothing new has been learned. Verification differs from proof precisely because it is analytical, and because it leads to nothing. It leads to nothing because the conclusion is nothing but the premisses translated into another language. A real proof, on the other hand, is fruitful, because the conclusion is in a sense more general than the premisses.
Historically, the goal of foundations of physics has always been to challenge accepted concepts which are deemed fundamental, by looking for mathematical reformulations which enable a natural synthesis (NB: not natural in the sense of naturalness, but in the classical sense, i.e. 'spontaneous' or the opposite of artificial) between conflicting concepts, often directly paired with novel experimental predictions.

Once some theory becomes too entrenched or embedded, dogmatically taken as necessarily (absolutely) true above other theories, things start to go awry. As Poincaré brilliantly pointed out a century ago - and echoed by Feynman decades later - axiomatic reasoning, being purely deductive, cannot offer a resolution to foundational issues in physics, because physical theory is incomplete: only hypotheses checkable by experiment can go beyond what is already known.

Having no-go results of uncertain validity is therefore actually of very questionable utility in the field of foundations, especially given the danger of premature closure and the consequent promotion of cognitive biases among theoreticians. The fact of the matter is that foundations is a small branch in the practice of physics; everyone benefits from keeping it from becoming little more than an echo chamber, which is sadly a real possibility, as we have seen in the practice of physics over the last century.
DarMM said:
And to come back to a point you made earlier:

(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.
Apart from a), which I have elaborated upon further, including in this very post, I agree you aren't doing b), c) and d). The problem is that those less familiar with foundations of physics will almost certainly do b), c) and d) - especially if (self-proclaimed) experts openly do a), as they in fact regularly seem to have done ever since foundations adopted axiomatics, starting with von Neumann.
DarMM said:
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.

W.r.t. QM foundations, I believe that the immediate focus should be theoretical i.e. a qualitative reformulation until conflicting concepts become consistent, leading to a resolution in which this new conceptual formulation can be restated using existing (but possibly non-standard) concrete mathematics, leading to experimental predictions; it is only after experimental verification that the mathematical physicists should try to find rigorous mathematical definitions.

Incidentally, Lucien Hardy essentially argues for this strategy for solving the problems in QM foundations as well, as seen in this thread, see my post there as well.
 
Last edited:
  • #79
DarMM said:
Well, the ontological models framework shows that certain such models have pathologies in a rigorous way; it's one thing to say something is "expected" according to your personal intuition, quite another to actually prove it.
Agreed. But I guess my point was that, even despite proofs, people do not seem to stop looking for loopholes; and from this perspective I argue that there is an easier way to convince yourself against using the type of uncertainty implicit in "ignorance", as I defined it above, as a universal explanation.

/Fredrik
 
  • #80
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws, including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well-behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with discontinuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
 
  • #81
Jimster41 said:
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws, including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well-behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with discontinuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
Optimistically, multifractal analysis might already be a sufficient tool, but that is just grasping in the dark.

If I remember correctly though, Nottale has a theory which sounds somewhat similar to the "all structure is emergent" idea, called scale relativity (or something like that).

Moreover, I would naively presume that something simple but alternative, like fractional calculus or multiplicative calculus, might be a useful form of calculus here, enlightening w.r.t. naturally capturing or identifying the correct physical quantities or equations involved in such a framework. Otherwise, more advanced algebraic-geometric or holomorphic notions would probably be necessary.
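
As a toy example of the kind of quantity that multifractal analysis generalizes (my own sketch, not tied to any of the references above), here is a box-counting estimate of the fractal dimension of the middle-thirds Cantor set:

Python:
# Toy box-counting estimate for the middle-thirds Cantor set; the true dimension
# is log(2)/log(3) ~ 0.6309. Multifractal analysis generalizes this single
# exponent to a whole spectrum of scaling exponents.
import math, random
random.seed(0)

def cantor_sample(n, digits=20):
    """Random Cantor-set points via ternary expansions with digits in {0, 2}."""
    return [sum(random.choice((0, 2)) * 3.0**-(k + 1) for k in range(digits))
            for _ in range(n)]

pts = cantor_sample(20000)
for k in range(2, 8):
    eps = 3.0**-k
    n_boxes = len({int(x / eps) for x in pts})  # occupied boxes of side eps
    print(f"eps = 3^-{k}: N = {n_boxes}, dim estimate = "
          f"{math.log(n_boxes) / math.log(1 / eps):.4f}")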
 
  • Like
Likes Jimster41
  • #82
The question of what mathematics will be required is indeed an interesting one. It is a paramount question in my perspective as well, as one of the key ingredients in the quest for a physical inference theory - that is, a generalisation of probability - is to characterise a MEASURE that is intrinsically constructible by a physical observer.

Inference, in the sense of reasoning under uncertainty, needs a measure to quantify confidence in certain things, as it conceptually boils down to how to COUNT evidence in a rational way. One problem in most axiomatic constructions of probability theory is that one introduces uncountable numbers without justification. Does an arbitrary observer have access to infinite-bit counters? The real justification is limits, but if you consider physical processes to be like computations, these limits are never actually reached, and pathologies arise in the theories when you assume that limits are manifested in observer states. What happens is that you lose track of the limiting procedures. I think a careful compliance with intrinsic measures will make convergences manifest. Divergences in theories are a symptom of abusing mathematics - mixing up "mathematical possibilities" with the actual possibilities involved in making inferences and placing bets. Even though you "can" fix them, they shouldn't have to arise in the first place.

So what I am saying is that I think smooth mathematics might approximate reality, not the other way around. Reconstructing quantum theory IMO unavoidably goes hand in hand with reconstructing the measure mathematics for counting and "summing" - i.e. what ends up as calculus in the continuum limit - but this is more complicated here, because the actual LIMIT may not be physical at all! My hunch is that it definitely is not.
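
As a toy illustration of the finite-counter point (my own sketch, not a formalism from any of the papers discussed here):

Python:
# An observer whose evidence counter has a fixed number of bits can only hold
# coarse tallies: its relative frequencies saturate and must be rescaled, so the
# idealised continuum limit of probability is never actually reached.
import random
random.seed(1)

def truncated_frequency(outcomes, bits):
    cap = 2**bits - 1                  # largest count the register can hold
    hits = total = 0
    for o in outcomes:
        if total == cap:               # register full: rescale, losing resolution
            hits, total = hits // 2, total // 2
        hits += o
        total += 1
    return hits / total

data = [random.random() < 0.3 for _ in range(100000)]  # "true" frequency 0.3
print(truncated_frequency(data, bits=4), truncated_frequency(data, bits=16))
# The 4-bit observer's estimate is much coarser and noisier than the 16-bit one.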

/Fredrik
 
  • Like
Likes Jimster41
  • #83
Auto-Didact said:
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - way too much uncertainty - for one to make definitive axiomatic statements (theorems) without deluding oneself.
Well, it eliminates theories that can't work; since many people have been inclined to suggest and build exactly the kind of models the theorems cover, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones and the class eliminated is quite broad, I think it's useful; it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.

Auto-Didact said:
The problem with reifying some piece of mathematics as an actual physical concept - especially an overt generalization such as the measurable set, a construction which, just like ZFC and other standard foundations of mathematics, was purely constructed a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and so get lost during the abstraction process; I am arguing that the ontology vanishes as well, leaving one with nothing but epistemology.
Doesn't this just apply to any kind of mathematical research though? I still don't see why something "supporting integration" is epistemological; it depends on how you are viewing it in your theory. You might consider it an underlying state space of ontic states which merely happens to support integration, but that doesn't make it an epistemic object, otherwise the manifold in GR would be an epistemic object.

Auto-Didact said:
The goal of foundations is to provide exactly such definitive statements...Historically, the goal of foundations of physics has always been to challenge accepted concepts
The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.

Auto-Didact said:
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.
I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining ##\psi##-ontology to be having a state space like ##\Lambda = \mathcal{H} \times \mathcal{A}##, ##\mathcal{H}## being part of the state space seems to be necessary for ##\psi##-ontology as ##\mathcal{H}## is simply the space of all ##\psi##s. Can you explain what ##\psi## being ontic without ##\mathcal{H}## being involved means? I think this would really help me.
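
For concreteness, the definitions I have in mind are roughly the standard Harrigan-Spekkens-style ones (a sketch from memory, not a quotation of any particular paper): an ontological model assigns to each preparation ##P## a probability measure ##\mu_P## over the measurable space ##\Lambda##, and to each measurement ##M## response functions ##\xi(k\mid\lambda,M)##, constrained to reproduce the Born rule,
$$\int_\Lambda \xi(k\mid\lambda,M)\,\mu_P(\mathrm{d}\lambda)=\operatorname{Tr}\!\left(\rho_P E_k\right).$$
The model is called ##\psi##-ontic if preparations of distinct pure states give measures with disjoint supports, so that ##\lambda## fixes ##\psi## uniquely, and ##\psi##-epistemic otherwise; writing ##\Lambda = \mathcal{H} \times \mathcal{A}## is then just the special case where the value of ##\psi## is carried explicitly as a coordinate of ##\lambda##.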

I understand if you simply see this sort of axiomatic investigation as not the optimal strategy or unlikely to help with progress. However, at times you seem to be suggesting that their conclusions, or even some of the definitions, are incorrect; this I don't really understand.
 
  • #84
Auto-Didact said:
The author convincingly demonstrates that practically everything known about particle physics, including the SM itself, can be derived from first principles by treating the electron as an evolved self-organized open system in the context of dissipative nonlinear systems.

So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper) what is the predictive power of the idea? SOSs need some level of underlying "system" to organize. Without QM what is that system even in principle?
 
  • Like
Likes akvadrako
  • #85
Paul Colby said:
So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper) what is the predictive power of the idea? SOSs need some level of underlying "system" to organize. Without QM what is that system even in principle?
At this stage, not immediately having PDEs or other equations isn't an issue whatsoever: one of the most successful scientific theories ever, evolution through natural selection, was not formulated using any mathematics at all, yet its predictions were very clear once conceptually grasped; but I digress.

To answer your question: that system is still particles, just not particles resulting from QM but instead from some deeper viewpoint; this is nothing new, since viewpoints of particulate matter far precede QM (by millennia).

Manasson's deeper perspective predicts, among many other things, as actual physical phenomena:
1) a dynamical mechanism underlying renormalization, capable of explaining all possible bare and dressed values of particles;
2) the quantized nature of objects in QM as a direct result of the underlying dynamics of particles themselves, rather than as a theoretically unexplained postulate.

Essentially, according to Manasson, there is a shift of particle physics foundations from QT to dynamical systems theory, with the mathematics and qualitative nature of QT resulting directly from the properties of a very special kind of dynamical system.
 
  • Like
Likes Jimster41
  • #86
Auto-Didact said:
To answer your question: that system is still particles, just not particles resulting from QM but instead from some deeper viewpoint; this is nothing new, since viewpoints of particulate matter far precede QM (by millennia).

This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", ##\psi_k##, which is essentially undefined, and a transformation, ##F(\psi_k)##, which is assumed to have a frequency-doubling property that people familiar with SOS likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word "particles" and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme, would these simply be disjoint facts, one parameter per prediction?
 
Last edited:
  • #87
Paul Colby said:
This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", ##\psi_k##, which is essentially undefined, and a transformation, ##F(\psi_k)##, which is assumed to have a frequency-doubling property that people familiar with SOS likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word "particles" and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme, would these simply be disjoint facts, one parameter per prediction?
Actually, section II starts off by considering the evolution into stable existence of an electron, i.e. a particle, in the following manner:
1) A negatively charged fluctuation of the vacuum occurs due to some perturbation.
2) The presence of the fluctuation causes a polarization of the vacuum.
3) This leads to positive and negative feedback loops in the interaction between vacuum and fluctuation, which together form the system.
4) Depending on the energy of the original perturbation, there are only two possible outcomes for this system: settling into thermodynamic equilibrium or bifurcation into a dynamically stable state.
5) Hypothesis: the electron is such a dynamically stable state.

In the above description there is only one characteristic relevant parameter for this system, namely charge (##q##). This can be reasoned as follows:

6) The described dynamics occur in a manifestly open system.
7) The stable states of this system are fractals, i.e. strange attractors, in the state space.
8) Therefore the full dynamics of the system is described by a nonlinear vector field ##\vec \psi## in an infinite dimensional state space.
9) Practically, this can be reduced to a low dimensional state space using a statistical mechanics or a hydrodynamics treatment.
10) This results in the state space of the system being described by just a few extensive variables, most importantly ##q##.

A simple dimensional analysis argument gives us a relationship between the action (##J##) and ##q## i.e. ##J=(\sqrt {\frac {\mu_0} {\epsilon_0}})q^2##. Carrying on:

11) Then take a Poincaré section through the attractor in the state space to generate the Poincaré map.
12) Parametrize this map using ##q## or ##J## and we have ourselves the needed recurrence map ##\psi_{J} = F(\psi_{J-1})##.
13) Given that the dynamics of this system is described by a strange attractor in state space this automatically ensures that the above map is a Feigenbaum map, displaying period doubling.
14) A period doubling is a phase transition of the attractor leading to a double loop attractor (a la Rössler).
15) The topology of this double loop attractor is the Möbius strip, with vectors inside this strip being spinors, i.e. this is also a first principles derivation of spinor theory.

A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research, and none of the steps taken above seems particularly controversial, either mathematically or physically.
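
For readers unfamiliar with steps 12-13, here is a minimal numerical sketch of what a period-doubling (Feigenbaum) cascade looks like; the logistic map is only a stand-in for the recurrence ##\psi_{J} = F(\psi_{J-1})##, not Manasson's actual map, and the last two lines are merely a quick arithmetic check of the dimensional relation ##J=\sqrt{\tfrac{\mu_0}{\epsilon_0}}\,q^2## quoted above (all of this is my own illustration, not taken from the paper):

Python:
# Illustrative only: period doubling in the logistic map, the simplest map in the
# Feigenbaum universality class; it is NOT Manasson's map, just a stand-in.
def attractor_period(r, x0=0.5, transient=5000, max_period=64, tol=1e-8):
    """Iterate past the transient, then return the smallest detected period."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(max_period + 1):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[-1] - orbit[-1 - p]) < tol:
            return p
    return None  # chaotic, or period longer than max_period

for r in (2.8, 3.2, 3.5, 3.55, 3.567):
    print(r, attractor_period(r))  # expected periods: 1, 2, 4, 8, 16

# Quick check of J = sqrt(mu_0/epsilon_0) * q^2 with q = e (CODATA-like values):
mu_0, eps_0, e, h = 1.25663706e-6, 8.8541878e-12, 1.602176634e-19, 6.62607015e-34
print((mu_0 / eps_0) ** 0.5 * e ** 2 / h)  # ~0.0146, i.e. roughly 2/137 in units of h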
 
  • #88
Auto-Didact said:
A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research, and none of the steps taken above seems particularly controversial, either mathematically or physically.

So, it should be fairly straightforward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
 
  • #89
DarMM said:
Well, it eliminates theories that can't work; since many people have been inclined to suggest and build exactly the kind of models the theorems cover, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones and the class eliminated is quite broad, I think it's useful; it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.
Can't work given certain assumptions, including the full validity of axioms of QM beyond what has been experimentally demonstrated; if QM is shown to be a limiting theory, many utilizations of the theorems to test hypotheses will be rendered invalid.
DarMM said:
Doesn't this just apply to any kind of mathematical research though? I still don't see why something "supporting integration" is epistemological; it depends on how you are viewing it in your theory. You might consider it an underlying state space of ontic states which merely happens to support integration, but that doesn't make it an epistemic object, otherwise the manifold in GR would be an epistemic object.
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics. 'It supports integration' is equally empty as the statement 'numbers are used in physics'.

If you were to consider the manifold in GR as just a measurable set, not necessarily pseudo-Riemannian or even differentiable, you would actually lose all the physics of GR, including diffeomorphism invariance: it would transform the manifold into exactly an epistemological object! Both statistics and information geometry have such manifolds, which are purely epistemic objects. The point is that you would no longer be doing physics but would have secretly slipped into doing mathematics.
DarMM said:
The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.
It eliminates lines of reasoning, yes; it may however also introduce lines of reasoning falsely, as described above. Every QM foundations paper using or suggesting that no-go theorems can effectively be used as statistical tests to make conclusive statements about different physical hypotheses needs to correct for the non-ideal nature of the test, i.e. report the accuracy of this test; this is an empirical matter, not a logical or mathematical one.
DarMM said:
I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining ##\psi##-ontology to be having a state space like ##\Lambda = \mathcal{H} \times \mathcal{A}##, ##\mathcal{H}## being part of the state space seems to be necessary for ##\psi##-ontology as ##\mathcal{H}## is simply the space of all ##\psi##s. Can you explain what ##\psi## being ontic without ##\mathcal{H}## being involved means? I think this would really help me.
I'm not saying ##\mathcal{H}## shouldn't be involved, I am saying in terms of physics it isn't the most important mathematical quantity we should be thinking about.
DarMM said:
I understand if you simply see this sort of axiomatic investigation as not the optimal strategy or unlikely to help with progress. However, at times you seem to be suggesting that their conclusions, or even some of the definitions, are incorrect; this I don't really understand.
Yes, there is a blatant use of the theorems as selection criteria for empirical hypotheses, i.e. as a statistical selection tool for novel hypotheses. The use of axiomatics in this manner has no scientific basis and is unheard of in the practice of physics, or worse, known to be an abuse of rationality in empirical matters.

The only valid use of such reasoning is selection of hypotheses based on conforming to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid over an enormous range, independently of specific theories; the axioms of QM (and QM itself, despite all that it has done) have simply not met this criterion yet.
 
  • #90
Auto-Didact said:
The only valid use of such reasoning is selection of hypotheses based on conforming to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid

An evolutionary model needs to allow for both variation and stability, in balance. If there is too much flexibility we lose stability and convergence in evolution. A natural way to do this is that hypothesis generation naturally rates the possibilities worth testing. In this perspective one can imagine that constraining hypothesis space is rational. Rationality here, however, does not imply that it is the right choice. After all, even in nature, evolved successful species sometimes simply die out, and it does not mean that they were irrational. They placed the bet optimally and they died, and that's how the game goes.

What I am trying to say here is that the situation is paradoxical. This is both a key and a curse. The problem arises when human scientists see it only from one side.

And IMO a possible resolution to the paradoxical situation is to see that the rationality of the constraints on hypothesis space is observer dependent. If you absorb this, there is a possible exploit to make here. For a human scientist to constrain his own thinking is one thing, and for an electron to constrain its own map of its environment is another. In the former case it has to do with being aware of our own logic and its limitations, and in the latter case it is an opportunity for humans to, for example, understand the action of subatomic systems.

/Fredrik
 
  • #91
Auto-Didact said:
Can't work given certain assumptions
Of course, as I have said, the theorems have assumptions, that's a given.

Auto-Didact said:
including the full validity of axioms of QM beyond what has been experimentally demonstrated
That depends on the particular theorem. Bell's theorem for example does not rely on the full validity of QM, similar for many others. This implies to me that you haven't actually looked at the framework and are criticising it from a very abstract position of your own personal philosophy of science and your impression of what it must be.

Auto-Didact said:
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics. 'It supports integration' is equally empty as the statement 'numbers are used in physics'.
It's not a proposal that the real space of states only has the property of supporting integration and nothing else. Remember how it is being used here. It is saying "If your model involves a state space that at least supports integration..."

So it constrains models where this (and four other assumptions) are true. It's not a proposal that nature involves only a set that involves integration and nothing else. The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers", to be honest that is just a kneejerk sneer at an entire field. Do you think if the framework was as useful as just saying "physics has numbers" that it would be accepted into major journals?

I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather it is a presentation of general properties common to many models that attempt to move beyond QM and then demonstrating that from those properties alone one gets constraints.

I.e. many models that attempt to replicate QM do have a state space that supports integration, and that, together with four other properties, is all you need to prove some theorems about them. Again, all the actual models are richer and more physical than this, but some of their (to some, less pleasant) properties follow from very general features like the integrability of the state space.

An analogue would be proving features of various metric theories of gravity. In such proofs you only state something like "the action possesses extrema", not because you're saying the action has that feature and nothing more, but because it's all you need to derive certain general features of such theories.

Auto-Didact said:
it would transform the manifold into exactly an epistemological object
I don't understand your use of epistemic I have to say. You seem to use it to mean abstract, but I don't see how a manifold is epistemic. "Stripped of physical content" maybe, but I don't know of any major literature calling this epistemic.

Auto-Didact said:
I'm not saying ##\mathcal{H}## shouldn't be involved
Well then coming back to where this originated, what makes it invalid as a definition of ##\psi##-ontic?
 
  • #92
Paul Colby said:
So, it should be fairly straightforward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
Not necessarily, there are multiple routes:
1) Direct prediction of numerics based on experiment: this requires attractor reconstruction, and unfortunately this usually isn't that simple. Usually, to discover numerics, one would have to make very precise time series measurements, in this case of the vacuum polarization process and of extremely high-field electrodynamics, and then utilize the Ruelle-Takens theorem in order to identify the attractor (a minimal delay-embedding sketch follows after this list); the problem here is that precise experimentation seems to be viciously complicated.

2) Direct prediction of numerics by guessing the correct NPDE: characterizing the actual numerics of orbits in QM without precise measurements essentially requires knowing the complete equations. Knowing the correct class of equations - giving qualitatively correct predictions of the general characteristics - is only a minuscule help w.r.t. identifying the uniquely correct NPDE, obviously because there is no superposition principle to help here.

3) Indirect: utilize the constructed spinor theory to rederive the Dirac equation and then guess the correct non-linearization thereof which incorporates renormalization as a physical process characterized by terms inside the new equation instead of an ad hoc procedure applied to an equation. This is far easier said than done, theorists have been attempting to do this since Dirac himself without any success so far.
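
For what route 1 involves in practice, here is a minimal sketch of delay-coordinate (Takens-style) embedding; purely illustrative, with a chaotic logistic-map series standing in for real measurement data:

Python:
# Delay-coordinate ("Takens") embedding: the basic tool behind attractor
# reconstruction from a measured scalar time series. The logistic map below is
# only a stand-in signal, not data from any actual experiment.
def delay_embed(series, dim=3, tau=1):
    """Return the dim-dimensional delay vectors built from a scalar series."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + k * tau] for k in range(dim)) for i in range(n)]

x, xs = 0.4, []
for _ in range(5000):
    x = 3.9 * x * (1.0 - x)            # chaotic regime of the logistic map
    xs.append(x)

vectors = delay_embed(xs, dim=3, tau=1)
print(len(vectors), vectors[0])        # embedded points tracing out the attractor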
 
  • #93
DarMM said:
Of course, as I have said, the theorems have assumptions, that's a given.
It's more important than you realize, as it makes or breaks everything, even given the truth of the five other assumptions you are referring to. If for example unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
DarMM said:
That depends on the particular theorem. Bell's theorem for example does not rely on the full validity of QM, similar for many others. This implies to me that you haven't actually looked at the framework and are criticising it from a very abstract position of your own personal philosophy of science and your impression of what it must be.
I have looked at the theorems. I should make clear that I am not judging all no-go theorems equally; I am saying each of them has to be judged on a case-by-case basis (as in law). Bell's theorem, for example, would survive, because it doesn't make the same assumptions/'mistakes' some of the others do. I am also saying that just because Bell's theorem is valid, it doesn't mean the others will be as well.
DarMM said:
The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers", to be honest that is just a kneejerk sneer at an entire field.
I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover whether these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property of the state space ##\Lambda##.
DarMM said:
Do you think if the framework was as useful as just saying "physics has numbers" that it would be accepted into major journals?
Yes, definitely. I have seen 'very good' papers across many fields of science, including physics, finance, economics, neuroscience, medicine, psychology and biology with equally bad or worse underlying conceptual reasoning; a mere mention of the limitations of the conclusions due to the assumptions is all a scientist needs to do to cover himself. There is no reason to suspect physicists are better than other scientists in this aspect.

Journals, including major journals, tend to accept papers based on clear scientific relevance, strong methodology and clear results, and not based on extremely carefully reasoned out hypotheses; one can be as sloppy in coming up with hypotheses as one wants, as long as a) one can refer to the literature to show that what one is doing is standard practice, and/or b) the hypothesis can be operationalized and that operationalization directly tested empirically.
DarMM said:
I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather it is a presentation of general properties common to many models that attempt to move beyond QM and then demonstrating that from those properties alone one gets constraints.
That framework is a class of models, characterizing the properties of many models. The particular theorem(s) in question then argue against the entire class in one swoop.

A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
DarMM said:
I don't understand your use of epistemic I have to say. You seem to use it to mean abstract, but I don't see how a manifold is epistemic. "Stripped of physical content" maybe, but I don't know of any major literature calling this epistemic.
I quote the Oxford Dictionary:
Definition of 'epistemic' in English:
epistemic (adjective): Relating to knowledge or to the degree of its validation.

Origin: 1920s: from Greek epistēmē ‘knowledge’ (see epistemology) + -ic.
Definition of epistemology in English:
epistemology (noun, mass noun):
Philosophy
The theory of knowledge, especially with regard to its methods, validity, and scope, and the distinction between justified belief and opinion.

Origin: Mid 19th century: from Greek epistēmē ‘knowledge’, from epistasthai ‘know, know how to do’.
 
Last edited:
  • #94
Auto-Didact said:
Not necessarily, there are multiple routes:

Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question, namely what underlying system the "method" is applied to, is at present completely unknown. I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.
 
  • #95
I'm trying to follow this discussion - which is interesting.
I am confused about how lattice models of quantum gravity fit (or don't) here.

My naive cartoon is that such a structure supports non-linearity with manifold-like properties. I mean, isn't iteration all that is required for some fractal generation?
There is the a priori structure of a "causal lattice" of space-time geometry to explain, but as epistemological ontologies go that's pretty minimal. Most importantly, as I understand it anyway, there are real calculators that are getting close to building the SM from them. In fact @atyy posted one in this very forum. I found it very, very hard to get much from it though - really hard.

https://www.physicsforums.com/threads/lattice-standard-model-wang-wen.958852/
 
Last edited:
  • #96
Auto-Didact said:
I quote the Oxford Dictionary:
How is a differentiable manifold epistemic though?
 
  • #97
Paul Colby said:
Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question, namely what underlying system the "method" is applied to, is at present completely unknown.
No, partially unknown. It is known that the correct equation:
- is a NPDE
- is reducible to the Dirac equation in the correct limit
- describes vacuum fluctuations
- has a strange attractor in its state space
- has a parameter displaying period doubling

An equation has to be constructed with the above things as given.
Paul Colby said:
I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.
I will let Feynman tell you why immediately having such an unrealistic expectation of a preliminary model such as this one is extremely shortsighted.
Feynman said:
For those people who insist that the only thing that is important is that the theory agrees with experiment, I would like to imagine a discussion between a Mayan astronomer and his student. The Mayans were able to calculate with great precision predictions, for example, for eclipses and for the position of the moon in the sky, the position of Venus, etc. It was all done by arithmetic. They counted a certain number and subtracted some numbers, and so on. There was no discussion of what the moon was. There was no discussion even of the idea that it went around. They just calculated the time when there would be an eclipse, or when the moon would rise at the full, and so on.

Suppose that a young man went to the astronomer and said, ‘I have an idea. Maybe those things are going around, and there are balls of something like rocks out there, and we could calculate how they move in a completely different way from just calculating what time they appear in the sky’. ‘Yes’, says the astronomer, ‘and how accurately can you predict eclipses ?’ He says, ‘I haven’t developed the thing very far yet’. Then says the astronomer, ‘Well, we can calculate eclipses more accurately than you can with your model, so you must not pay any attention to your idea because obviously the mathematical scheme is better’.

There is a very strong tendency, when someone comes up with an idea and says, ‘Let’s suppose that the world is this way’, for people to say to him, ‘What would you get for the answer to such and such a problem ?’ And he says, ‘I haven’t developed it far enough’. And they say, ‘Well, we have already developed it much further, and we can get the answers very accurately’. So it is a problem whether or not to worry about philosophies behind ideas.
In other words, what you are asking about is an important eventual goal post - one of several - that should eventually be reached. Arguing from a QG or QM foundations perspective it is important, but definitely not the most important thing for the preliminary model to achieve at this stage.

In the ideal circumstance, this would be achieved in the format of a large research programme investigating the model, preferably with Manasson as the head of the research group and with PhD students carrying out the research.
 
  • Like
Likes Buzz Bloom
  • #98
Auto-Didact said:
In other words, what you are asking about is an important eventual goal post - one of several - that should eventually be reached.

If 50 years of string theory has taught us anything it's something about chicken counting and hatching.
 
  • Like
Likes Auto-Didact
  • #99
DarMM said:
How is a differentiable manifold epistemic though?
Easy: if the manifold doesn't characterize an existing object, but merely characterizes knowledge. There are manifolds in information geometry which can be constructed using the Fisher information metric; these constructions are purely epistemic.

In fact, all objects in statistics based on probability theory are completely epistemic, because probabilities (and all related quantities such as distributions, averages, variances, etc) aren't themselves objects in the world but encodings of the relative occurrence of objects in the world.

Physics, outside of QM, is different because it directly refers to actually existing - i.e. ontic - properties of objects in the world like mass and velocity. This is why physics is clearly an empirical science, while probability theory is part of mathematics.
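
To make the information-geometry example concrete (my own sketch, purely illustrative): the Fisher metric on the family of 1-D Gaussians ##N(\mu,\sigma)## is ##g=\mathrm{diag}(1/\sigma^2,\,2/\sigma^2)##, and every point of that manifold is a probability assignment, i.e. a state of knowledge, not a physical system.

Python:
# Symbolic check of the Fisher information metric on the statistical manifold of
# 1-D Gaussians N(mu, sigma); each point of this manifold is a distribution,
# i.e. a state of knowledge, which is the sense in which such manifolds are epistemic.
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
sigma = sp.symbols('sigma', positive=True)
logp = -(x - mu)**2 / (2 * sigma**2) - sp.log(sigma) - sp.Rational(1, 2) * sp.log(2 * sp.pi)
p = sp.exp(logp)
params = (mu, sigma)

g = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        # g_ij = E[(d log p / d theta_i) (d log p / d theta_j)]
        integrand = sp.diff(logp, params[i]) * sp.diff(logp, params[j]) * p
        g[i, j] = sp.simplify(sp.integrate(integrand, (x, -sp.oo, sp.oo)))

print(g)  # expected: Matrix([[1/sigma**2, 0], [0, 2/sigma**2]])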
 
  • #100
Paul Colby said:
If 50 years of string theory has taught us anything it's something about chicken counting and hatching.
The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.
 
  • Like
Likes Fra
