Foundations argument: Silberstein et al engage Hiley-channeling-Bohm

  • #1
marcus
"Foundations" argument: Silberstein et al engage Hiley-channeling-Bohm

Foundations of Physics (ed. 't Hooft) has accepted this Silberstein et al paper for publication:
http://inspirehep.net/record/922919?ln=en
It seems to me to resurrect some Bohmian trends of thought, perhaps in a new form, and to engage those ideas with the contrasting ones of the authors.

Several of us here (and maybe also in Cosmo forum) have an interest in the often very original ideas of David Bohm (expat American physicist 1917 - 1992) and might find it fun to check this out. Hopefully those who understand Bohmian thought better than I will correct any mistakes on my part.

Just for background: http://en.wikipedia.org/wiki/David_Bohm
Warning about what seems to me a tendency in Bohm's work toward mysticism decently clothed in abstract language:
http://en.wikipedia.org/wiki/Implicate_and_explicate_order_according_to_David_Bohm

Basil Hiley's work is in some sense part of the Bohm legacy. Hiley co-authored with Bohm
(e.g. http://arxiv.org/abs/quant-ph/0612002 ) after the latter moved to University of London's Birkbeck College.

Make no mistake, this is real Foundational stuff! It won't be to everyone's taste, by a long shot. They are speculating about what, if anything, could underlie the geometry and algebra of existence. And Silberstein et al have their own "pre-geometry" block universe to contrast with Basil Hiley's Bohmian view. It is a block universe not made of the familiar 19th Century differential manifold with its Riemannian 4D continuum machinery, but instead woven of relationships: a "relational blockworld" (RBW) universe, which they refer to as "spacetimematter".
 
  • #3


Incidentally some interesting-looking Bohm links:
http://www.bbk.ac.uk/lib/about/bohm
Basil Hiley seems to have been instrumental in preserving and assembling some of this.
Some Basil Hiley papers:
http://www.bbk.ac.uk/tpru/BasilHiley/ASQT.html
(interesting titles, and some links to PDF files)
Paul Davies introduces this as a 1981 paper by him, Bohm, and Hiley:
http://arxiv.org/abs/quant-ph/0612002
Algebraic Quantum Mechanics and Pregeometry
D.J. Bohm, P.G. Davies, B.J. Hiley
which Davies, for completeness, later (2006) posted on the arxiv.
 
  • #4


Thanks for that reference - it was indeed fun - and I come down on Hiley's monism side of things: "mind and matter are formed from, or reducible to, the same ultimate substance or principle of being".

And that ultimate "principle of being" would, for me, be information processing.

Most computers would agree with me, if they could posit things, I feel sure.
 
  • #6


debra said:
...
And that ultimate "principle of being" would, for me, be information processing.

Most computers would agree with me, if they could posit things, I feel sure.

Keep an eye out for the PF member called "RUTA"; he knows about some of this stuff.
About what computers would tell us if they could speak "from the central processor", I have to plead intellectual modesty and caution.
I'm glad you found the links fun.:biggrin:
 
  • #7


debra said:
- and I come down on Hiley's monism side of things - "mind and matter are formed from, or reducible to, the same ultimate substance or principle of being". And that ultimate "principle of being" would, for me, be information processing.

I find it difficult to envision how stuff like qualia/subjectivity/the mental can emerge from stuff we would consider material/physical (now or by a future physics), or even informational - although getting a clear definition of what information is is no easy task in itself. I found the quote below interesting. The author takes up Russell's/Eddington's argument that, since we are "blind" with respect to the intrinsic properties of "matter", we will likely never make much progress:

But now it seems that Strawson is confusing here the possibility of the emergence of mind from scientifically described properties like mass, charge, or spin, with the possibility of the emergence of mind from the intrinsic properties that correspond to these scientific properties. It is indeed the case that mind cannot emerge from scientifically described extrinsic properties like mass, charge, and spin, but do we know that mind could not emerge from the intrinsic properties that underlie these scientifically observable properties? It might be argued that since we know absolutely nothing about the intrinsic nature of mass, charge, and spin, we simply cannot tell whether they could be something non-mental and still constitute mentality when organised properly. It might well be that mentality is like liquidity: the intrinsic nature of mass, charge and spin might not be mental itself, just like individual H2O-molecules are not liquid themselves, but could nevertheless constitute mentality when organised properly, just like H2O-molecules can constitute liquidity when organised properly (this would be a variation of neutral monism). In short, the problem is that we just do not know enough about the intrinsic nature of the fundamental level of reality that we could say almost anything about it.

Finally, despite there is no ontological difference between the micro and macro levels of reality either on the intrinsic or extrinsic level, there is still vast difference in complexity. The difference in complexity between human mentality and mentality on the fundamental level is in one-to-one correspondence to the scientific difference in complexity between the brain and the basic particles. Thus, even if the intrinsic nature of electrons and other fundamental particles is in fact mental, this does not mean that it should be anything like human mentality—rather, we can only say that the ontological category their intrinsic nature belongs to is the same as the one our phenomenal realm belongs to. This category in the most general sense is perhaps best titled ‘ideal’.

Mind as an Intrinsic Property of Matter
http://users.utu.fi/jusjyl/MIPM.pdf
 
  • #8


bohm2 said:
I read it before, but I found it a bit obscure and difficult to understand - maybe it's just me?

Alas, most people find RBW difficult to comprehend. Jeffrey Bub said it took him three epiphanies to understand RBW, and each epiphany would require a week of lectures to teach grad students.

Essentially, we work with a blockworld approach where past, present and future are all equally 'real' (see for an explanation of blockworld). Since we're working with 4D instead of (3+1)D, we're not thinking in terms of 3D entities or substances evolving in time. Rather, we understand that one needs to compute the probability for 4D regions "as a whole," i.e., without breaking it up into a time-evolved story about 3D 'things'. This is to say, we take the Feynman path integral approach literally. When you view QM like this, the mysteries disappear (as noted by Feynman -- sorry, I don't have the citation). Accordingly, there are no quantum 'entities' moving through the experimental equipment to cause detector clicks. Rather, such clicks are evidence of the relations that compose the equipment involved in this particular experimental procedure.

Anyway, according to this view, GR must be modified because we don't have empty spacetime, i.e., space, time and matter are co-constructed in our approach per a self-consistency criterion (SCC). While we do need to modify GR, Einstein's equations (EEs) are an excellent example of an SCC, since you can't specify the stress-energy tensor (SET) on the RHS without the metric (g) and you can't specify the Einstein tensor (a function of g) on the LHS without the SET on the RHS. You can understand the 4D "self-consistency" nature of GR by doing Regge calculus, i.e., a discrete graphical approach to GR. Therein a solution is a value of the SET and g on each link of the graph which satisfy a system of equations (one equation for every link of the graph), i.e., the graphical counterpart to EEs. Once you have such a 4D solution, you may or may not be able to read off a (3+1)D story about entities moving in space as a function of time. If one understands the 4D solution as fundamental, rather than any particular (3+1)D story it allows, then one has no problem accepting 'uncaused' events such as the big bang. Anyway, that is what we advocate -- 4Dism. And, worse, 4Dism where relations (represented by graphical links) are the fundamental constituents, not 3D 'things' with worldlines.

So, you build a graph that satisfies the SCC and represents your experimental process (to include a particular outcome) and use it to compute the probability of that particular outcome. Probability is then interpreted per 4D frequency of occurrence in the blockworld. It's just the path integral approach taken literally, as applied to relations in 4D instead of 3D 'things' evolving in time.

We used this idea to modify the Regge calculus Einstein-deSitter model and account for the large-z supernovae data without accelerating expansion, i.e., no cosmological constant in a decelerating universe. That paper appeared in Class. Quant. Grav. last month (http://arxiv.org/abs/1110.3973). So, the interpretation has consequences elsewhere in physics.
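Not from the paper, but to give a concrete feel for "computing the probability for a 4D region as a whole": here is a toy Python sketch that sums amplitudes over entire discretized histories between fixed endpoints, rather than evolving a 3D state forward in time. The lattice, time step, mass and free-particle action are all arbitrary choices of my own, purely for illustration of the "whole path at once" bookkeeping, not RBW's actual amplitude.

```python
import itertools
import numpy as np

# Toy "whole-path" amplitude: sum exp(i*S/hbar) over every lattice path
# from x_initial to x_final, instead of evolving a state in time.
# All numbers (mass, lattice, hbar = 1) are arbitrary illustration choices.
m, dt, hbar = 1.0, 0.1, 1.0
sites = np.linspace(-1.0, 1.0, 5)           # 5 spatial lattice points
n_steps = 4                                 # 4 time steps -> 3 interior slices
x_initial, x_final = sites[2], sites[2]

amplitude = 0.0 + 0.0j
for interior in itertools.product(sites, repeat=n_steps - 1):
    path = (x_initial, *interior, x_final)  # one complete (1+1)D history
    # free-particle action summed over the whole path at once
    S = sum(0.5 * m * ((path[k + 1] - path[k]) / dt) ** 2 * dt
            for k in range(n_steps))
    amplitude += np.exp(1j * S / hbar)

print("toy whole-path amplitude:", amplitude)
print("relative probability   :", abs(amplitude) ** 2)
```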
 
  • #9


I can comprehend an expanding block universe but think it's unnecessary.

If we consider that the universe is related to a computer simulation (which many do), then time is similar to how it would appear in such a simulation.

All simulations require a ticking clock so that data / instructions can be added and cause the simulation to 'happen' at the output of its registers. So all movements on a screen (as in a normal computer) are actually discrete jumps and not continuous. The 'rate' of processing is clearly dictated by the clock tick rate.

So the passage of time could be measured by counting ticks. The number of ticks has no meaning in itself - there is nothing that transcends the tick - data changes are then 'perceived' as a flow of time. But it's an arbitrary final data state minus an initial data state, counted in discrete steps.

I believe this cuts through many ontological issues.
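As a minimal illustration of the "time as a tick count" picture described above (the state and update rule below are arbitrary choices, nothing physical):

```python
# Toy illustration of "time as a tick count": just a register updated once
# per clock tick, with elapsed "time" defined as a difference of tick counts.
state = {"tick": 0, "register": 0}

def clock_tick(s):
    """Advance the simulation by one discrete step."""
    s["tick"] += 1
    s["register"] = (s["register"] + 3) % 256   # arbitrary update rule

initial_tick = state["tick"]
for _ in range(10):
    clock_tick(state)

elapsed = state["tick"] - initial_tick          # "passage of time" = tick count
print(f"elapsed ticks: {elapsed}, register now: {state['register']}")
```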
 
  • #10


debra said:
I can comprehend an expanding block universe but think it's unnecessary.

If we consider that the universe is related to a computer simulation (which many do), then time is similar to how it would appear in such a simulation.

All simulations require a ticking clock so that data / instructions can be added and cause the simulation to 'happen' at the output of its registers. So all movements on a screen (as in a normal computer) are actually discrete jumps and not continuous. The 'rate' of processing is clearly dictated by the clock tick rate.

So the passage of time could be measured by counting ticks. The number of ticks has no meaning in itself - there is nothing that transcends the tick - data changes are then 'perceived' as a flow of time. But it's an arbitrary final data state minus an initial data state, counted in discrete steps.

I believe this cuts through many ontological issues.

How does this solve the measurement problem, and how does it account for violations of Bell's inequality?
 
  • #11


RUTA said:
We used this idea to modify the Regge calculus Einstein-deSitter model and account for the large z supernovae data without accelerating expansion, i.e., no cosmological constant in a decelearting universe. That paper appeared in Class. Quant. Grav. last month (http://arxiv.org/abs/1110.3973). So, the interpretation has consequences elsewhere in physics.

I would hope that folks would take note of this point. Here we have an interpretation of QM (and, effectively, of GR) that makes a specific prediction - one which I could imagine as being testable.

Wow! :smile:
 
  • #12


RUTA said:
How does this solve the measurement problem ...

Can you remind me what 'the measurement problem' is referring to? I am a bit out of practice at present ...
 
  • #14


RUTA said:

That reference above assumes that there are real physical objects in space-time. That may not be the case! Now look at the simulation model:

A red spot on a computer screen travels from left to right.
Let's look closely at that.
The red spot travels in discrete 'jumps' from one location to another. When it is 'jumping' it is not on the screen at all. It only appears (for the refresh-rate duration) for a brief time and then appears at the next location.

If we apply that model to the Universe, then a 'particle' jumps in a similar way, except that for our Universe it only appears when required. It decoheres when the registers output a value in the Heisenberg area. When not decohering it's not physically *there* at all - it's a calculation and not a physical object. A pixel is not a physical object - it is a color value output at a location x,y,z,t - a number that we interpret as a red square object.



It's analogous to one pixel on a screen that is only outputting color values at its calculated location. When it is not outputting - or is 'jumping' - it is not on the screen at all; it's a calculated result of an algorithm in the background, not on the screen at all.

If we are in a simulation (and many say we are) then there are no physical objects at all and everything is - like Pythagoras said - number.
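To make the pixel picture above concrete, here is a minimal sketch (the trajectory function and refresh rate are arbitrary choices of mine): the spot exists "on screen" only at the ticks where a value is output; in between, there is only the background function.

```python
# Toy version of the pixel picture: the "red spot" is only a number computed
# by an algorithm; it exists "on screen" solely at output (refresh) times.
def spot_position(tick):
    """Background calculation: where the spot *would* be at this tick."""
    return tick * 2            # arbitrary left-to-right motion

refresh_every = 5              # the spot is only rendered at these ticks
for tick in range(20):
    if tick % refresh_every == 0:
        x = spot_position(tick)            # a value becomes an output
        print(f"tick {tick:2d}: pixel drawn at x = {x}")
    # between refreshes there is no pixel anywhere -- only the function above
```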

Your reply?
 
  • #15


RUTA said:
how does it account for violations of Bell's inequality?

I am slightly rusty on Bell's inequality but let's try it:

Two entangled particles in superposition output correlated results that violate Bell's inequality.
So the polarizations of coupled photons at different locations do this, and Bell's theorem proves it cannot be due to hidden variables carried in the particles. Hope I am right there.

So how can it happen? - The two particles are pointers to the same memory location - so they both have knowledge of each others state. The pointers of the particles are not separated by physical distance because they are simply in memory. (In a computer the memory is a small chip)

So when one entangled particle decoheres revealing a random state the other particle 'knows' what that revealed state is because they are both referring to the same memory that defines them both.

I am hoping that Bell's inequality would be solved by such a set-up and that I am not contradicting a fundamental here. Maybe I am wrong and such a system could not violate Bell's inequality. Any experts on Bell's here?
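A minimal sketch of the "two pointers to one memory location" set-up described above (a toy of my own, not a full model): two particle handles refer to the same cell, so their readouts always agree. Note this only captures the perfect correlation of a single shared value; reproducing the angle-dependent statistics that actually violate Bell's inequality would require the shared cell to be consulted nonlocally at measurement time.

```python
import random

# Toy sketch of "two pointers to one memory location" entanglement.
class SharedCell:
    """One memory location that both 'particles' refer to."""
    def __init__(self):
        self.value = None           # undetermined until first readout

    def read(self):
        if self.value is None:
            self.value = random.choice(["H", "V"])   # random outcome on first access
        return self.value

class Particle:
    def __init__(self, cell):
        self.cell = cell            # pointer to the shared memory, not a copy

    def measure(self):
        return self.cell.read()

cell = SharedCell()
left, right = Particle(cell), Particle(cell)
print(left.measure(), right.measure())   # always identical: same memory cell
```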



What do you think?
 
  • #16


debra said:
I am slightly rusty on Bell's inequality but let's try it:

Two entangled particles in superposition output correlated results that violate Bell's inequality.
So the polarizations of coupled photons at different locations do this, and Bell's theorem proves it cannot be due to hidden variables carried in the particles. Hope I am right there.

So how can it happen? - The two particles are pointers to the same memory location - so they both have knowledge of each others state. The pointers of the particles are not separated by physical distance because they are simply in memory. (In a computer the memory is a small chip)

So when one entangled particle decoheres revealing a random state the other particle 'knows' what that revealed state is because they are both referring to the same memory that defines them both.

I am hoping that Bell's inequality would be solved by such a set-up and that I am not contradicting a fundamental here. Maybe I am wrong and such a system could not violate Bell's inequality. Any experts on Bell's here?



What do you think?
This works just fine as far as Bell is concerned. This is called "spooky action at a distance". Or in your model we can call it a spooky connection of a single memory cell to two different sets of adjacent memory cells.
 
  • #17


zonde said:
This works just fine as far as Bell is concerned. This is called "spooky action at a distance". Or in your model we can call it a spooky connection of a single memory cell to two different sets of adjacent memory cells.

I didn't have time to respond. Thanks for stepping in, zonde.
 
  • #18


debra said:
If we apply that model to the Universe, then a 'particle' jumps in a similar way, except that for our Universe it only appears when required. It decoheres when the registers output a value in the Heisenberg area. When not decohering it's not physically *there* at all - it's a calculation and not a physical object.

But if it's not a physical object and just a calculation, how does one explain quantum interference?

I thought I'd post this other paper on RBW that may be useful for trying to make sense of this model:

Reversing the arrow of explanation in the Relational Blockworld: Why temporal becoming, the dynamical brain and the external world are all "in the mind"
http://philsci-archive.pitt.edu/3249/1/ZiF_05_stu.pdf
 
  • #19


RUTA said:
If one understands the 4D solution as fundamental, rather than any particular (3+1)D story it allows, then one has no problem accepting 'uncaused' events such as the big bang. Anyway, that is what we advocate -- 4Dism. And, worse, 4Dism where relations (represented by graphical links) are the fundamental constituents, not 3D 'things' with worldlines.
But turning the 3D dynamical world into a 4D static blockworld does not change the physical laws.
Instead of moving billiard balls we have spaghetti-like objects extending in timelike directions (so obviously timelike directions differ from spacelike directions in the 4D blockworld).

So we see as "normal" those patterns that extend in timelike directions, and (rather limited) patterns extending in spacelike directions can emerge only as secondary patterns from timelike ones. "Uncaused" events are then just as strange in the 4D blockworld as they are in the 3D dynamical world.

Then we have this statement that relations are more fundamental than 'things'. Fine, but to claim that it makes some difference (in a consistent way) we would like to compare it with the more classical approach by converting these patterns of relations into a 3D dynamical representation. Or alternatively we can convert 3D dynamical laws into 4D static patterns. But it seems to me that mathematically there is no big difference between the two representations, and the only difference is how you visualize it.

So my question is: what is the brand new thing about RBW? I don't see it.
 
  • #20


zonde said:
But turning the 3D dynamical world into a 4D static blockworld does not change the physical laws.
Instead of moving billiard balls we have spaghetti-like objects extending in timelike directions (so obviously timelike directions differ from spacelike directions in the 4D blockworld).

So we see as "normal" those patterns that extend in timelike directions, and (rather limited) patterns extending in spacelike directions can emerge only as secondary patterns from timelike ones. "Uncaused" events are then just as strange in the 4D blockworld as they are in the 3D dynamical world.

Then we have this statement that relations are more fundamental than 'things'. Fine, but to claim that it makes some difference (in a consistent way) we would like to compare it with the more classical approach by converting these patterns of relations into a 3D dynamical representation. Or alternatively we can convert 3D dynamical laws into 4D static patterns. But it seems to me that mathematically there is no big difference between the two representations, and the only difference is how you visualize it.

So my question is: what is the brand new thing about RBW? I don't see it.

In current thinking, the billiard balls are made of molecules (with spaghetti-like worldlines) and the molecules of atoms (again, with worldlines), etc. In fact, part of what particle detectors do is find the worldlines of fundamental particles; the curve-fitting parameters then yield particle properties such as mass and charge. In RBW, the fundamental constituents are relations, not particles with worldlines, so the fundamental rule is not a dynamical law about particles (as it is with particle physics via interacting fields). The entire enterprise is thus different: we must find this fundamental adynamical rule for relations that does result statistically in dynamical laws for things with worldlines.

Here is a good way to understand how this difference is manifested conceptually in an experimental situation. In current thinking, the experimental outcome of a high-energy particle experiment includes particle tracks in the detector. The actual data is thousands of individual detector clicks so that the particle tracks are constructed by curve fitting through detector clicks. The particles/curves are then the fundamental entities according to the theory. In RBW, the individual clicks are fundamental -- or more precisely, they represent individual relations which are fundamental. See how this changes the game dramatically?

Anyway, the FoP paper just accepted (the topic of this thread) explains our new approach to fundamental physics. A self-consistency criterion (SCC, Kv = J) governs the construction of the graphs from which transition amplitudes for various processes are computed (K and J are constructed from boundary operators on the graph, v is the vector of vertices). As an analogy, think Regge calculus (the graphical version of GR), where one uses the resulting graph to compute transition amplitudes. The paper shows how the proposed SCC (which follows from the boundary of a boundary principle, dd = 0, as do GR and EM) necessarily yields gauge invariance (and, therefore, gauge fixing) and divergence-free sources, and how it yields the 'spaghetti-like world' statistically.

The paper also uses this idea to resolve QM mysteries. I'll let you read the paper, but hopefully it will be clear that we are proposing a very different way to 'explain' reality.
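This is not the construction from the paper, but a standard discrete toy showing the kinds of objects named above: a boundary (incidence) matrix for a small made-up graph, a difference matrix K built from it, a source vector J, and the vertex vector v solving Kv = J. Note that in this toy a solution exists only when J sums to zero, a rough analogue of the divergence-free-source point.

```python
import numpy as np

# Toy Kv = J on a chain graph: 4 vertices, 3 links.
# B[link, vertex] = +1 at the link's head, -1 at its tail (boundary/incidence matrix).
B = np.array([[-1,  1,  0,  0],
              [ 0, -1,  1,  0],
              [ 0,  0, -1,  1]], dtype=float)

K = B.T @ B                            # difference (graph Laplacian) matrix
J = np.array([1.0, 0.0, 0.0, -1.0])    # made-up source; must sum to zero since K @ 1 = 0

# K is singular (a constant shift of v is unphysical), so solve by least squares.
v, *_ = np.linalg.lstsq(K, J, rcond=None)
print("vertex values v:", np.round(v, 3))
print("check K v      :", np.round(K @ v, 3))   # reproduces J
```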
 
  • #21


Let me add an example of 4D thinking to my explanation above. In Regge calculus, one is to find the metric g and stress-energy tensor T on every link of the graph that solve Regge's equations (obtained from the extremum of the graphical action). So, if someone asked, "Why is there 5 kg*m/s of momentum on link X?" the answer would be, "Because that is what results from the values of T and g on link X, and those values are needed to satisfy Regge's equations everywhere else on the graph. If you changed T and g on X, you would have to change them on links Y and Z and ... . Then you would have a different solution." Do you see how this differs from a (3+1)D explanation involving the history of forces on some particle?
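Not Regge calculus, but a toy linear system (with made-up numbers) illustrating the style of answer just given: the "link" values are fixed jointly, so insisting on a different value for one link forces every other value to change, i.e., a different global solution.

```python
import numpy as np

# Toy demonstration of "self-consistency" style explanation:
# three "link" values fixed jointly by coupled (made-up) constraints.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([5.0, 0.0, 0.0])

links = np.linalg.solve(A, b)
print("self-consistent link values:", np.round(links, 3))

# "Why is link 0 equal to this value?"  Because the whole set must satisfy
# the equations.  Pin link 0 to a different value and re-solve the remaining
# equations: every other link changes too -- a different global solution.
pinned = links[0] + 1.0
rest = np.linalg.solve(A[1:, 1:], b[1:] - A[1:, 0] * pinned)
print("pin link 0 to", round(pinned, 3), "-> other links become", np.round(rest, 3))
```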
 
  • #22


RUTA said:
In current thinking, the billiard balls are made of molecules (with spaghetti-like worldlines) and the molecules of atoms (again, with worldlines), etc. In fact, part of what particle detectors do is find the worldlines of fundamental particles; the curve-fitting parameters then yield particle properties such as mass and charge. In RBW, the fundamental constituents are relations, not particles with worldlines, so the fundamental rule is not a dynamical law about particles (as it is with particle physics via interacting fields). The entire enterprise is thus different: we must find this fundamental adynamical rule for relations that does result statistically in dynamical laws for things with worldlines.
Laws for things with worldlines are not dynamical. They do not change with time.

RUTA said:
Here is a good way to understand how this difference is manifested conceptually in an experimental situation. In current thinking, the experimental outcome of a high-energy particle experiment includes particle tracks in the detector. The actual data is thousands of individual detector clicks so that the particle tracks are constructed by curve fitting through detector clicks. The particles/curves are then the fundamental entities according to the theory. In RBW, the individual clicks are fundamental -- or more precisely, they represent individual relations which are fundamental. See how this changes the game dramatically?
Well, I suppose that the first thing you have to explain is - what are "relations" in RBW?

Just declaring that "relations" are more fundamental than "particles" ... it's hardly something. What we want from theory are reusable descriptions of patterns that we see in the blockworld. So can you provide arguments that looking at "relations" instead of "particles" will make descriptions of patterns better?


RUTA said:
As an analogy, think Regge calculus (graphical version of GR), where one uses the resulting graph to compute transition amplitudes.
It is the first time I have heard about Regge calculus, so it does not work as an explanation. I would say that if you want to explain something you have to stick with pretty common things.
 
  • #23


zonde said:
Laws for things with worldlines are not dynamical. They do not change with time.

Here is what I mean by "dynamical laws" as found in the section called "How the universe works" of Sean Carroll's http://blogs.discovermagazine.com/cosmicvariance/2012/04/28/a-universe-from-nothing/

Let’s talk about the actual way physics works, as we understand it. Ever since Newton, the paradigm for fundamental physics has been the same, and includes three pieces. First, there is the “space of states”: basically, a list of all the possible configurations the universe could conceivably be in. Second, there is some particular state representing the universe at some time, typically taken to be the present. Third, there is some rule for saying how the universe evolves with time. You give me the universe now, the laws of physics say what it will become in the future. This way of thinking is just as true for quantum mechanics or general relativity or quantum field theory as it was for Newtonian mechanics or Maxwell’s electrodynamics.
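A minimal sketch (mine, not Carroll's) of that three-piece paradigm, just to make the contrast with an adynamical self-consistency criterion vivid: a space of states, one state "now", and a rule evolving it forward.

```python
# Toy version of the quoted paradigm: state space, present state, evolution rule.
state_space = range(6)          # every configuration this toy universe allows
state = 2                       # the state at the present moment
assert state in state_space

def evolve(s):
    """The dynamical law: given the universe now, return the universe next."""
    return (s + 1) % 6          # arbitrary rule, for illustration only

for _ in range(4):
    state = evolve(state)
print("state after four time steps:", state)   # prints 0
```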

zonde said:
Just declaring that "relations" are more fundamental than "particles" ... it's hardly something. What we want from theory are reusable descriptions of patterns that we see in the blockworld. So can you provide arguments that looking at "relations" instead of "particles" will make descriptions of patterns better?

Depends on what you mean by "better." Of course we think the idea is a "better" interpretation of QM and QFT, but that's subjective. What's not subjective is that it leads to a new approach to unification and quantum gravity (see our CQG paper).

zonde said:
Well, I suppose that the first thing you have to explain is - what are "relations" in RBW? ... It is the first time I have heard about Regge calculus, so it does not work as an explanation. I would say that if you want to explain something you have to stick with pretty common things.

Well, if by "pretty common things" you mean 3D objects evolving in time, I can't help you. That's the whole point -- 4D relations replace 3D objects evolving in time as the fundamental entities and a self-consistency criterion for the 4D relations replaces dynamical laws for the 3D time-evolved objects. Regge calculus is an excellent example, but I can't teach you that here -- you can read about it online, I'm sure. But, briefly, here is how we propose to view Regge calculus per a 4D perspective.

You solve Regge's equations to find the metric g and stress-energy tensor T for each link on the graph. Regge's eqns are obtained from a graphical least action principle, but just like GR, you can't say what you mean by T without knowing g (LHS of Einstein's eqn) and you can't know g without knowing T (RHS of Einstein's eqn). Thus Regge's eqns (counterpart to Einstein's eqn) are a self-consistency criterion, i.e., each solution provides a self-consistent T and g for each link of the graph where "self-consistent" is dictated by Regge's eqns. Now suppose you find a solution and someone asks, "Why is there 5 kg*m/s of momentum on link X?" The answer is, "Because g and T on link X give 5 kg*m/s of momentum and if you changed g and T on X, you'd have to change g and T on Y and Z and ... to solve Regge's equations. That is, you'd have a different self-consistent set of g and T on the graph (a different solution)." Do you see how very different this explanation is than one involving the history of forces acting on some particle? [Sorry for the repeat of this last point on this thread.]
 
  • #24


RUTA said:
Here is what I mean by "dynamical laws" as found in the section called "How the universe works" of Sean Carroll's http://blogs.discovermagazine.com/cosmicvariance/2012/04/28/a-universe-from-nothing/

Let’s talk about the actual way physics works, as we understand it. Ever since Newton, the paradigm for fundamental physics has been the same, and includes three pieces. First, there is the “space of states”: basically, a list of all the possible configurations the universe could conceivably be in. Second, there is some particular state representing the universe at some time, typically taken to be the present. Third, there is some rule for saying how the universe evolves with time. You give me the universe now, the laws of physics say what it will become in the future. This way of thinking is just as true for quantum mechanics or general relativity or quantum field theory as it was for Newtonian mechanics or Maxwell’s electrodynamics.
Then say "dynamics laws" or "laws of dynamics", don't say "dynamical laws".

RUTA said:
Depends on what you mean by "better." Of course we think the idea is a "better" interpretation of QM and QFT, but that's subjective. What's not subjective is that it leads to a new approach to unification and quantum gravity (see our CQG paper).
Of course as a proponent of the idea you should think it's better. But what arguments can you provide for your position? And in whatever way you mean "better".

RUTA said:
Well, if by "pretty common things" you mean 3D objects evolving in time, I can't help you. That's the whole point -- 4D relations replace 3D objects evolving in time as the fundamental entities and a self-consistency criterion for the 4D relations replaces dynamical laws for the 3D time-evolved objects. Regge calculus is an excellent example, but I can't teach you that here -- you can read about it online, I'm sure. But, briefly, here is how we propose to view Regge calculus per a 4D perspective.
RBW speaks about two things - blockworld and relations. I am saying that I have no problem with the blockworld concept. Worldlines of particles are 4D objects in the blockworld.
But please explain how "relations" differ from "worldlines". Both are 4D objects. How can you define (describe) "relations" in RBW (given worldlines)?
 
  • #25


zonde said:
Of course as a proponent of the idea you should think it's better. But what arguments can you provide for your position? And in whatever way you mean "better".

There are items of personal preference of course, but the most compelling reason we have for believing RBW is "better" than other interpretations of QM is that RBW suggests corrections to GR (as in CQG paper). These changes provide a new path to (indeed, a new understanding of) quantum gravity and unification.


zonde said:
RBW speaks about two things - blockworld and relations. I am saying that I have no problem with the blockworld concept. Worldlines of particles are 4D objects in the blockworld. But please explain how "relations" differ from "worldlines". Both are 4D objects. How can you define (describe) "relations" in RBW (given worldlines)?

Worldlines in spacetime are constructed from relations graphically -- see figures 1 and 2 of the FoP paper http://arxiv.org/abs/1108.2261. Briefly, Kv = J summarizes the topological situation (e.g., number and "duration" of worldlines involved) and different field configurations on the graph yield different geometries (spacetime configuration of the worldlines). The probability of any particular geometry is given by the transition amplitude computed for the graph, where K is the difference matrix and J is the source vector, evaluated at that geometry.

So, here is how it works conceptually. An experimental process involving objects such as beam splitters, mirrors, sources, detectors, etc. is modeled graphically. Those are the "objects" of Figure 2(a). Now, there is some underlying relational composition of those experimental objects involved in that experimental process as represented by Figure 1(b). In that context, the experimental outcome reflects one of those relations (Figure 2(b)). In this view, there are no other "objects" involved in the experiment, i.e., no "quantum entities" moving as waves or particles among the experimental objects to "cause" the experimental outcome. The "true" quantum/fundamental entities are the relations and they don't "have" worldlines, they "make" the worldlines and spacetime context for the experimental objects. [Note: Space, time and matter are co-constructed from relations, so one doesn't think of "matter in spacetime" or "matter warping spacetime" but rather, one thinks of an inseparable "spacetimematter." This is one way we differ from GR, i.e., one can have vacuum solutions in GR.]

The reason quantum outcomes are statistical is because many different relational configurations can give rise to a particular experimental process. As an analogy, there are many different distributions of molecular velocities that can give rise to a particular temperature for some gas. Thus, one can only ask questions such as, "What is the probability of finding relation X in experimental procedure Y?"

Hope this helps.
 
  • #26


RUTA said:
So, here is how it works conceptually. An experimental process involving objects such as beam splitters, mirrors, sources, detectors, etc. is modeled graphically. Those are the "objects" of Figure 2(a). Now, there is some underlying relational composition of those experimental objects involved in that experimental process as represented by Figure 1(b). In that context, the experimental outcome reflects one of those relations (Figure 2(b)). In this view, there are no other "objects" involved in the experiment, i.e., no "quantum entities" moving as waves or particles among the experimental objects to "cause" the experimental outcome. The "true" quantum/fundamental entities are the relations and they don't "have" worldlines, they "make" the worldlines and spacetime context for the experimental objects. [Note: Space, time and matter are co-constructed from relations, so one doesn't think of "matter in spacetime" or "matter warping spacetime" but rather, one thinks of an inseparable "spacetimematter." This is one way we differ from GR, i.e., one can have vacuum solutions in GR.]

The reason quantum outcomes are statistical is because many different relational configurations can give rise to a particular experimental process. As an analogy, there are many different distributions of molecular velocities that can give rise to a particular temperature for some gas. Thus, one can only ask questions such as, "What is the probability of finding relation X in experimental procedure Y?"

Hope this helps.
What is represented by boxes in these 1(b) and 2(b) figures? Are they different relational configurations or something else?


There are some quite interesting things in that (http://arxiv.org/abs/1108.2261) paper. But there are things that are rather unacceptable to me. For example this:
"Thus, RBW provides a wave-function-epistemic account of quantum mechanics with a time-symmetric explanation of interference via acausal global constraints [17]. Quantum physics is simply providing a distribution function for graphical relations responsible for the experimental equipment and process from initiation to termination. So, while according to some such as Bohmian mechanics, EPR-correlations and the like evidence superluminal information exchange (quantum non-locality), and according to others such correlations represent non-separable quantum states (quantum non-separability), per RBW these phenomena are actually evidence of the deeper graphical unity of spacetimematter responsible for the experimental set up and process, to include outcomes [16][17]. RBW is therefore integral calculus thinking writ large [16][19]."
I do not see that blockworld means acausality. It just transforms causality into the types of patterns that we do observe in the blockworld (vs patterns that we do not observe).
So this explanation seems to suggest a conspiracy-type explanation under cover of the blockworld.

Another is the confrontation between algebra and geometry. Well, I suppose that the geometric approach is more appealing to me, but just the same I do not like the idea that the two approaches would give different results.
 
  • #27


zonde said:
What is represented by boxes in these 1(b) and 2(b) figures? Are they different relational configurations or something else?

The boxes are just graphical nodes. We made them boxes so they were clearly visible along parallel links.

zonde said:
I do not see that blockworld means acausality. It just transforms causality into the types of patterns that we do observe in the blockworld (vs patterns that we do not observe). So this explanation seems to suggest a conspiracy-type explanation under cover of the blockworld.

The blockworld *contains* causality, but the construct of the blockworld is not *based* on causality, it's based on a self-consistency criterion. Thus, the ultimate (most fundamental) answer to "Why ... ?" is not a causal story. Again, an analogy is Einstein's eqns of GR. If you get caught up in a dynamical/causal (3+1)D story for GR cosmology, you're stuck at the Big Bang with an unexplainable event. But, if you view EEs as a self-consistency criterion, the Big Bang is no more mysterious than any other event on the spacetime manifold.

zonde said:
Another is the confrontation between algebra and geometry. Well, I suppose that the geometric approach is more appealing to me, but just the same I do not like the idea that the two approaches would give different results.

I'm not sure I understand your comment. Would you please elaborate?
 
  • #28


RUTA said:
The boxes are just graphical nodes. We made them boxes so they were clearly visible along parallel links.
So are vertical lines like worldlines of equipment?

RUTA said:
The blockworld *contains* causality, but the construct of the blockworld is not *based* on causality, it's based on a self-consistency criterion. Thus, the ultimate (most fundamental) answer to "Why ... ?" is not a causal story. Again, an analogy is Einstein's eqns of GR. If you get caught up in a dynamical/causal (3+1)D story for GR cosmology, you're stuck at the Big Bang with an unexplainable event. But, if you view EEs as a self-consistency criterion, the Big Bang is no more mysterious than any other event on the spacetime manifold.
As I see it, this self-consistency criterion is the same thing as a global conspiracy that makes experiments show results consistent with entanglement. So it is useless as an explanation.

RUTA said:
I'm not sure I understand your comment. Would you please elaborate?
For example this:
"There has been a very long standing debate in Western philosophy and
physics regarding the following three pairs of choices about how best to model
the universe: 1) the fundamentality of being versus becoming, 2) monism
versus atomism and 3) algebra versus geometry broadly construed; more
generally, which of the myriad formalisms will be most unifying."
 
  • #29


zonde said:
So are vertical lines like worldlines of equipment?
Yes.


zonde said:
As I see it, this self-consistency criterion is the same thing as a global conspiracy that makes experiments show results consistent with entanglement. So it is useless as an explanation.

So, you think Einstein's equations are useless as an explanation? It's a matter of personal preference, of course, but certainly there are people who demand explanation be in the form of interacting, time-evolved "things." For them, the reason I gave for why there is 5 kg*m/s of momentum on link X per Regge calculus does not constitute an "explanation." I imagine there were Aristotelians who did not consider reasons per Newtonian mechanics "explanatory" either. Since we are proposing a very different ontology, we are de facto proposing a different understanding of what it means to "explain." That is, our fundamental ontological entities are not time-evolved "things," so our fundamental explanations are not stories about the interactions of such "things."

zonde said:
For example this:
"There has been a very long standing debate in Western philosophy and
physics regarding the following three pairs of choices about how best to model
the universe: 1) the fundamentality of being versus becoming, 2) monism
versus atomism and 3) algebra versus geometry broadly construed; more
generally, which of the myriad formalisms will be most unifying."

You said you don't like the idea that "two approaches would give different results." So, in this context you're saying you don't like the idea that "two approaches" would give "different forms of unification?"
 
  • #30


RUTA said:
Yes.
Hmm, then isn't your idea similar to the Heisenberg picture, where time evolution is applied to the operators (measurement equipment) rather than to the state?


RUTA said:
So, you think Einstein's equations are useless as an explanation?
So are you saying that Einstein's field equations are like your consistency criterion?
The stress-energy tensor determines the curvature of spacetime. But can there be a stress-energy tensor that does not have a valid solution for the curvature of spacetime in the future direction (even if it has a valid solution in the past direction), and for that reason we exclude that particular configuration as a rule?

Wouldn't all configurations leading to singularities then be excluded as a rule? But that isn't the case.

RUTA said:
It's a matter of personal preference, of course, but certainly there are people who demand explanation be in the form of interacting, time-evolved "things." For them, the reason I gave for why there is 5 kg*m/s of momentum on link X per Regge calculus does not constitute an "explanation." I imagine there were Aristotelians who did not consider reasons per Newtonian mechanics "explanatory" either. Since we are proposing a very different ontology, we are de facto proposing a different understanding of what it means to "explain." That is, our fundamental ontological entities are not time-evolved "things," so our fundamental explanations are not stories about the interactions of such "things."
I am not sure it is matter of personal preference. If such an approach as yours makes the idea not falsifiable in principle then it would be preference of scientific research.
 
  • #31


zonde said:
Hmm, then isn't your idea similar to the Heisenberg picture, where time evolution is applied to the operators (measurement equipment) rather than to the state?

Our approach represents every piece of experimental equipment as it relates to the others in the experimental process.

zonde said:
So are you saying that Einstein's field equations are like your consistency criterion?

Yes, although our equation gives K and J for the transition amplitude, not a classical outcome like Einstein's equations (EEs).

zonde said:
The stress-energy tensor determines the curvature of spacetime.

The problem is that the curvature of spacetime is a function of the metric g and so is the stress-energy tensor T. So, which is specified and which is solved for in EEs? The answer is, you must solve EEs for T and g "simultaneously." That's why we use the phrase "self-consistency criterion" to describe EEs -- they constitute a "self-consistent" relationship between T and g.

zonde said:
But can there be a stress-energy tensor that does not have a valid solution for the curvature of spacetime in the future direction (even if it has a valid solution in the past direction), and for that reason we exclude that particular configuration as a rule?

Wouldn't all configurations leading to singularities then be excluded as a rule? But that isn't the case.

Do we have GR solutions with singularities? Well, if by "solution" you mean a finite T and g for all points of the spacetime manifold, then there are no "solutions" with singularities. We do make solutions from "near" solutions by omitting singular points, e.g., by omitting the Big Bang in FRW cosmologies. We then assume that the singular point omitted to make the GR solution is finite in some (yet to be discovered) theory fundamental to GR.

zonde said:
I am not sure it is matter of personal preference. If such an approach as yours makes the idea not falsifiable in principle then it would be preference of scientific research.

I'm assuming that we are only talking about scientific approaches, i.e., those which are in principle falsifiable. Then one chooses which, if any, he is willing to work on based on personal preference.
 
  • #32


In my view, the RBW approach sounds essentially like being very carefully true to what the scientific process actually is and actually can support, rather than entering into more pretentious modes of thought about what we'd like science to be or what we imagine it should be, but never demonstrably was. So I appreciate your careful description of it. Some of it echoes Bohr's brilliant "there is no quantum world," so perhaps it is an epistemological cousin to the Copenhagen interpretation, though I'm sure you can be quick to point out more essential differences and the potential for predictive character.

Ultimately, I do suspect that to make further progress, physics itself will need to recast its fundamental mission, in language that escapes the naive "God's eye" framing of past physics models, and embraces, perhaps, more internally consistent language like what you are striving for so meticulously. I realize that some still hold that a "God's eye" view continues to be the primary goal of physics ("God is a mathematician" and so forth), so to them, adopting your kind of language about reality would be the death of the mission of physics, rather than further progress in its birthing process. But I don't agree with them.
 
  • #33


RUTA, I have a question about how relations should look in the spacetime picture of interference. Say, how are these two pictures represented using relations:
116qljc.jpg
14t9f04.jpg


Basically the question is how you handle the situation where two different paths start and end at the same worldline.
 
  • #34


RUTA said:
I'm assuming that we are only talking about scientific approaches, i.e., those which are in principle falsifiable. Then one chooses which, if any, he is willing to work on based on personal preference.
I do not understand how we can set up an experiment that could test a retrocausal prediction. And I am not even sure I want to discuss that, sorry. And if you say that GR is making retrocausal predictions, then I consider this to be a serious argument against GR.

Well, I have to admit that to me it seems like the process of evolution can produce effects that could look very much like retrocausality while being perfectly causal.
 
  • #35


zonde said:
RUTA, I have a question about how relations should look in the spacetime picture of interference. Say, how are these two pictures represented using relations:
116qljc.jpg
14t9f04.jpg


Basically the question is how you handle the situation where two different paths start and end at the same worldline.

The situations are different and would be modeled differently. The situation on the left has overlapping connections with the objects on the sides and sequential connections at the end. The situation on the right has sequential connections with objects on the sides and overlapping connections with the object at the end. To understand what would be involved in trying to do this using our fundamental graphical approach, you'll need to read the analysis in http://arxiv.org/ftp/arxiv/papers/0908/0908.4348.pdf . Start on p 22 at "Moving now to N dimensions, the Wick rotated version ..." through the solution on p 25, i.e., eqns 23-25. Then read section 3.4 Twin-Slit Experiment. All that analysis would be required just to do the upper half of your situation on the right.
 