So what about the FQXi time essay contest? It's February already.

In summary: FQXi launched an essay contest in 2008 on the nature of time, with winners originally expected in February 2009. Due to the recession, the announcement was delayed until March 2009. The winners were Julian Barbour, Claus Kiefer, Sean Carroll, Carlo Rovelli, George F. R. Ellis, Rodolfo Gambini, Jorge Pullin, and Tejinder Singh. Their essays explored the concept of time in classical dynamics, quantum gravity, and string theory.
  • #36
I haven't followed the entire thread or context due to lack of time lately :( but one of a few things that stick out in the timeless reasoning of Rovelli and others, which I mysteriously don't find consistent with the RQM spirit, is that he without headache pulls out stuff like a state space, and never ever seems to question how this state space is "communicated". There is some incoherent use of reasoning here, from my perspective.

As Rovelli so excellently argues in his RQM paper, the only way to compare information between observers is by physical interactions.

But what happened to the information implied in notions such as "state space"?

This doesn't make sense to me. I'm much more in favour of the way Smolin argues that state spaces are evolving. And I think the natural way to unite that with the RQM reasoning is to try to describe the physical process, whereby the state space is formed, as seen by a specific choice of observer.

The same goes for the notion of laws, which certainly contains information.

What bugs me more than anything else is that the observable ideal seems, oddly, to apply to some mathematical objects (say, states) but not to others (such as state spaces and laws).

So far Smolin's evolution ideas seem to be the only sensible resolution. He has some great points regarding inside vs outside observers. The physics as seen by a true inside observer is still missing. A lot of the conclusions seem to have been drawn from experience with physics where one can consider an external observer, and then arguments are made that by analogy this should hold also for the general case. I find it very hard to find such arguments convincing.

/Fredrik
 
  • #37
Peter Morgan said:
...When we have a probabilistic or quantum-mechanical theory, we don't even have the individual events in the mathematics, we don't even have statistics, we only have expected values. Of course there are engineering rules for how we should make the comparison between statistics and expected values, and some people are much better at using those heuristics than others...

Hello Peter, it seems you are bothered by the lack of a proper link between the abstraction of probability distributions and the more physical stream of individual events?

I recall from past discussions on your random field paper that you seek different solutions, but this is a key problem that I think is at the heart of the matter, and it's what has led me to seek a reconstruction of the notion of probability, which includes a reconstruction of the continuum. The idea would be to consider the emergence of the probability distributions and probability spaces to be a physical process. Thus the notion that some experiments are effectively just observed once or twice, means that those probability spaces themselves are very volatile if one assigns statistical weights to the probability spaces themselves; and if one takes this to be a physical process of a self-organizing observer, this process is overall constrained by the complexity of the observer. That explains the two extreme cases of observation: we humans studying a small subsystem in an accelerator, versus a human-run observatory making observations on the cosmological scale. Such a reconstruction should make explicit the difference in how the statistics are properly built, since the relative complexity of observer vs observed would be a key.

I have a feeling that you were looking for an alternative solution to this problem.

As far as I understand it, Rovelli explicitly avoids this problem: in his book he writes in a footnote that he doesn't want to discuss the meaning of probability. Up until that point, I think Rovelli is very clear.

/Fredrik
 
  • #38
As I think you hinted, this is probably present also pre-QM, and related to the ergodic hypothesis etc. But I think it escaped analysis at that time; it was the advent of the higher standards of QM, as a _theory of observables_, that made these problems more obvious. To do QM without facing these questions is like trying to use measurement gauges to chart the world without for a second considering that it may be relevant to understand the process by which these measurement gauges were devised, and that there is even an ongoing revision of these gauges.

I.e. it's like a theory of measurement without a theory of the measurement devices used to produce and store the measurement results. It seems the only reasonable explanation why these problems are still around is that it hasn't been fashionable physics?

/Fredrik
 
  • #39
Hi, Fredrik.

I'm not much bothered by the lack of relationship between statistics of events and probability densities in mathematical models. I'm happy enough to be fairly pragmatic about the relationship between models and the world, in what I would call a post-positivist way.

I don't like other people's rather Platonic accounts of the relationship between mathematics and the world, which typically assert a metaphysical interpretation of probability (something like propensities). I don't insist on verification or on falsifiability, which I think are interesting but crude measures of how interesting a theory is, but I'd nonetheless like something with a little more solidity than that. I think Carlo Rovelli wants to be only slightly Platonic, at least compared to some of the other characters in the QG game, so I can't get too excited about him not wanting to expend a lot of effort on discussing the relationship between statistics and probability.

I see your concern, but I think I'm comfortable with more-or-less glossing the distinction between statistics and probabilities, most of the time, so I can try to address what I take to be more pressing worries. I think that's somewhat different from Carlo Rovelli's position, insofar as my philosophical position is more empiricist.

Earlier today, I read the physics arXiv blog comments on http://www.technologyreview.com/blog/arxiv/23144/ (longer, more worthwhile). What's the relevance to your comments? To do compressed sensing, we do randomly chosen measurements, which allow us to reconstruct a model (prototypically a photograph, but more generally anything that requires a lot of data to be gathered) of the real world. There is a sense in which No Measurement Is Repeated, but all the measurements are instances from a desired class of measurements. Your comment, "Thus the notion that some experiments are effectively just observed once or twice, means that those probability spaces themselves are very volatile", fits quite nicely into this approach. Something that is observed once or twice (ball lightning, perhaps? Supernovae, etc.) is placed into the giant compressed sensing dataset that is everything ever observed by a physicist, out of which pops, with a little gentling, a model that approximates all the data, despite gaping holes in the data. [Of course, it's a stretch to say that all of Physics is a compressed/ive sensing process.] The isolated measurements may have pride of place or not, but in some statistical sense they will fit in the overall pattern.
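
To make the compressed-sensing idea concrete, here is a minimal toy sketch (my own made-up example with arbitrary parameters, not the procedure from the blog post or from any paper): recover a sparse signal from far fewer random measurements than unknowns, by iterative soft-thresholding (ISTA) applied to the l1-penalized least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5            # signal length, number of random measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix; each row is one "measurement"
y = A @ x_true                             # the m measured values (far fewer than n unknowns)

def ista(A, y, lam=0.01, n_iter=2000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by soft-thresholded gradient steps."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))                         # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold (enforces sparsity)
    return x

x_hat = ista(A, y)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With these (arbitrary) numbers the sparse signal should typically be recovered well despite the gaping holes in the data, which is the point of the analogy.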

The detailed algorithmic process of engineering a description, such as compressed/ive sensing, constitutes a pragmatic theory of measurement that is very active. Not that more and different can't be done and be interesting and perhaps even fashionable.

I don't think the distinction between statistics and probabilities "escaped analysis" in classical statistical physics. There was a lot of effort to give a solid grounding to the ergodic hypothesis, for example, as I'm sure you know, but no-one was much satisfied with the strength of the assumptions that had to be made to be able to prove much.

I'm sorry to say that I suspect I haven't engaged much with your argument. I think I got sidetracked at a few points.

Peter.
 
  • #40
Peter Morgan said:
I'm not much bothered by the lack of relationship between statistics of events and probability densities in mathematical models. I'm happy enough to be fairly pragmatic about the relationship between models and the world, in what I would call a post-positivist way.

Ah ok, I think I remember now, from the last discussions.

Peter Morgan said:
What's the relevance to your comments? To do compressed sensing, we do randomly chosen measurements, which allow us to reconstruct a model (prototypically a photograph, but more generally anything that requires a lot of data to be gathered) of the real world. There is a sense in which No Measurement Is Repeated, but all the measurements are instances from a desired class of measurements. Your comment, "Thus the notion that some experiments are effectively just observed once or twice, means that those probability spaces themselves are very volatile", fits quite nicely into this approach. Something that is observed once or twice (ball lightning, perhaps? Supernovae, etc.) is placed into the giant compressed sensing dataset that is everything ever observed by a physicist, out of which pops, with a little gentling, a model that approximates all the data, despite gaping holes in the data. [Of course, it's a stretch to say that all of Physics is a compressed/ive sensing process.]

Data compression is conceptually highly relevant to my thinking, and I think I see your point.

I picture self-organization as a kind of re-organisation of data, which effectively is a kind of data compression, but there is one important difference from normal data compression, and it is quite analogous to the human brain. Research has indicated that the human brain does not optimize storage of memories for historical accuracy; instead the memory is reorganized and optimized to be of maximum use in predicting the future. This is an interesting aspect, and I think the story is similar in physics. The whole point of retaining the past is that by being able to predict the future, the observer can increase its own odds of "survival" (in the evolutionary picture). Thus some historical accuracy is lost or deformed, for the benefit of more fitness in the future. The brain self-organizes and decides HOW to deform and tweak this data; it's an intelligent form of evolving data compression "algorithm" that is not possible to predict. The evolving picture is critical. This is similar to Smolin's argument, taken also from biology, that it is in general impossible to replace an evolving configuration space with a large super-space, because it would be so big that the computer necessary to perform even the simplest calculations would not fit on Earth!

Edit: this kind of conclusion is what is totally lost when holding too strong a mathematical attitude. The complexity of representation and computation is IMHO constrained by the physics, and should therefore constrain the mathematics. Hopefully this would help us build not only self-consistent mathematics from the armchair position, but also mathematics that is consistent with the physical constraints of computation and representation.

This means that the data storage cannot be a simple time history. It would be far too inefficient, given that the memory structure is limited. This is why patterns must be found, and then a compressed and truncated form of the history must be stored. This contains many interesting problems, such as decision problems as to what data to discard.
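
Just to illustrate (a toy of my own, not a model of any physical observer): store only the few dominant Fourier modes of a recorded history and discard everything else. The "decision problem" is then simply how many, and which, coefficients to keep.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1024)
# a "history": two periodic patterns buried in noise
history = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*17*t) + 0.1*rng.normal(size=t.size)

spectrum = np.fft.rfft(history)
keep = 8                                    # the "decision": how much to retain
idx = np.argsort(np.abs(spectrum))[-keep:]  # keep only the strongest patterns
compressed = np.zeros_like(spectrum)
compressed[idx] = spectrum[idx]             # everything else is discarded (lossy!)

reconstruction = np.fft.irfft(compressed, n=history.size)
error = np.linalg.norm(reconstruction - history) / np.linalg.norm(history)
print(f"kept {keep}/{spectrum.size} coefficients, relative error {error:.2f}")
```

The stored record is a tiny fraction of the raw history; the periodic patterns survive, while the noise is most of what gets thrown away.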

There are IMHO even very interesting analogies between this decision process, and the discarding of "useless information", and black hole Hawking-type radiation. The radiation might well be considered "useless" - containing no information - from the point of view of the black hole itself, but it might well contain information relative to a large outside observer. This is one thing I have in mind that I think this reasoning can be applied to.

But also the way a system interacts, and how the different forces unify, might relate to the data compression, as having mixed and truncated some more fundamental data stream. The one-pixel camera makes me think of a "one-state" communication channel: it might still be possible to build up a complex image by driving the compressed and evolving image from such a one-state channel, so that in effect, eventually, that communication channel might grow.

/Fredrik
 
  • #41
Hey Peter, I don't know if you've seen Holger Nielsen's reasoning that he calls "random dynamics". I admit it was some time since I skimmed it and I don't recall the details or to what extent I share his reasoning, but I remember that he is quite ambitious.

The suggestive name he chose is why I came to think of it here, since you talk about random fields :)

He's a Danish physicist at NBI.
http://en.wikipedia.org/wiki/Holger_Bech_Nielsen

His random dynamics reasoning is loosely outlined at
http://www.nbi.dk/~kleppe/random/qa/qa.html

Among other things he says

"What we take as the fundamental "world machinery", is a very general, random mathematical structure , which contains non-identical elements and some set-theoretical notions."

If I am not mixing this up with someone else, I think Lee Smolin also briefly mentioned Holger Nielsen in one of his Cosmological Natural Selection papers, but I don't remember which paper.

/Fredrik
 
  • #42
Fra said:
This means that the data storage cannot be a simple time history. It would be far too inefficient, given that the memory structure is limited. This is why patterns must be found, and then a compressed and truncated form of the history must be stored. This contains many interesting problems, such as decision problems as to what data to discard.

This is also the sense in which one can realize the prior information implicitly carried by an evolved structure. It is a compressed form of coded history, far more efficient than a time history.

So emergent structures can abstractly be seen as evolved, compressed (lossy compression, of course) histories.

/Fredrik
 
  • #43
Thanks, Fra. I looked at Holger Nielsen's Random Dynamics with some interest. I would say, ignoring that I might be wrong about whether Lie random fields are interesting, that he's somewhat stuck in a curious mix of classical and quantum thinking, and that he might find random fields to be a beginning of a way out. The similarity of names is not quite completely irrelevant. I've e-mailed him to see if we might have a conversation.

I see I've set you off with my talk of data compression! I can see it all away in the sky ahead, and I quite like it, but I don't see it leading to mathematics. I don't think we can make Physics formally into a compressed sensing mathematics, unless we already have a UUT (Universal Unified Theory --- from me satire, from them serious). I should have said that it's a stretch to say that all of Physics [can be understood formally as] a compressed/ive sensing process, even though it may be very productive indeed to set up a given experiment as a compressed sensing process.
 
  • #44
I'm glad you found that interesting. As to whether he's stuck somewhere, I'd have to read his reasoning again to even comment. I read some of it a long time ago, and remember that it did make an impression, even though there was quite a lot left to do.

Peter Morgan said:
I see I've set you off with my talk of data compression! I can see it all away in the sky ahead, and I quite like it, but I don't see it leading to mathematics. I don't think we can make Physics formally into a compressed sensing mathematics, unless we already have a UUT (Universal Unified Theory --- from me satire, from them serious). I should have said that it's a stretch to say that all of Physics [can be understood formally as] a compressed/ive sensing process, even though it may be very productive indeed to set up a given experiment as a compressed sensing process.

Ok, maybe it's a bit of a stretch, but I'm ready to stretch! And I do see great possibilities to get mathematics out of this :) But I can tell that the more I've been thinking about this, the less of a stretch it really is. More than anything, it is a new way of reasoning! And a very intuitive one at that, once you get into it.

But it's not going to be easy, and it's not going to be in the standard form of initial value problems and laws. The creation of the mathematics would itself be part of the equations, just like in true evolution. In this picture, there is full symmetry between initial condition and initial law. They are treated on the same footing.

Sorry about the excitation.

/Fredrik
 
  • #45
Peter Morgan said:
I don't think we can make Physics formally into a compressed sensing mathematics, unless we already have a UUT (Universal Unified Theory --- from me satire, from them serious).

For me there is a simple comfort to such worries. If that is really how nature does it - and to me there are overwhelming indications from many fields, enough to make it somehow very plausible - then why couldn't I "copy nature", which is my only master, and do it as well? If nature can do it, so can I. Humans have copied nature's solutions for as long as we can remember. So nature has then successfully proliferated its logic to some of its parts (me, for example), who repeat it. That's another angle on an evolutionary view. Proliferation doesn't necessarily need black holes spawning new babies, although that may of course be correct in parallel; there is no conflict there that I can see. So maybe compression algorithms copy themselves by induction. Their way of reasoning, and compression, is encoded in their interaction properties, so the interaction process is itself the process of reproduction.

But regarding formal systems, you have a good point and I agree. This is why I think what we are looking for is a new form of systems, and new mathematics. Somehow the construction of the UUT will never be complete (I agree), but its progression should somehow be entangled with this quest.

/Fredrik
 
  • #46
Fra said:
But regarding formal systems, you have a good point and I agree. This is why I think what we are looking for is a new form of systems, and new mathematics. Somehow the construction of the UUT will never be complete (I agree), but its progression should somehow be entangled with this quest.
/Fredrik
Of course I only think random fields might be a way point, close enough to QFT to be useful for constructing certain kinds of useful models without too much pain, but different enough to give a broader understanding and perspective that might lead into another, potentially very different mathematics.
 
  • #47
Peter Morgan said:
Of course I only think random fields might be a way point, close enough to QFT to be useful for constructing certain kinds of useful models without too much pain, but different enough to give a broader understanding and perspective that might lead into another, potentially very different mathematics.

Yes, this is a good reason for sure. My personality is such that I find that not focusing on what I think is the way to go, even if it seems like an overwhelming task (which it sometimes does), is more painful. To me being out of focus is stressful, more than being exposed to an overwhelming task. As long as I think the quest is right, I am calm, even if the road looks dark. To find out I've solved the wrong problem would be a high stress factor for me. So I prefer to try to solve maybe a fraction of what I think is the right problem. This is why I spend so much time reflecting over what is technically very basic, but the basics are after all your foundation for the rest too. This is not something you can afford to do as a professional, I think, if you have to publish papers at a given rate.

I read your random field paper the last time, but as I think I commented then, my first hangup was the lack of description of the indexing process, which I think is due to my own twisted personal perspective. I think I'd like to also see the index of the random fields as a random process; then your ideas would be more coherent (from my strange perspective). I really appreciate that you seem to have an original approach; this is great IMHO. I think more diversity and independent, original thought is needed. So I enjoyed your papers in that respect. Unfortunately, due to my own preferences, I don't have many directed comments.

/Fredrik
 
  • #48
Fra said:
I read your random field paper the last time,
Fra, there's a new random field paper on arxiv.
 
  • #50
Peter Morgan said:
Thanks Marcus.

Fra, I was going to PM you to say so. It's on today's arXiv, and there's a thread at https://www.physicsforums.com/showthread.php?p=2123793, kindly set up by Marcus.

Great. I must have missed that thread. Thanks, I'll check your paper later and probably also skim your previous paper to refresh.

/Fredrik
 
  • #51
The random dynamics approach is very interesting to me. Can anyone point to others who are taking a similar philosophical approach?

Nielsen is saying that the universe emerges as an averaging over canonical randomness in some sense. So from utter "hot" complexity we have a descent towards coldest simplicity.

Traditional thinking is that the initial conditions for a universal theory are the cold simplicity of a symmetry (a symmetry that gets broken in a cascade of dis-unifications). Nielsen's approach suggests that the initial conditions are instead what others would call a state of vagueness - a chaos of all possibilities. This also would look "symmetrical" in some lights. But it is different in being an ultimately complicated state (if disorganised, and so not complex in the complex-systems sense).

Note that it would stand in clear opposition to the more popular multiverse approach. One says that from simple components, a multiplicity of worlds must be spawned. The other argues that from every kind of component, one average system must emerge as the dynamic "thermal" equilibrium balance.

Anyway, who else may be following an "averaging over complexity, averaging over pure randomness" approach to universe origins?
 
  • #52
apeiron said:
Anyway, who else may be following an "averaging over complexity, averaging over pure randomness" approach to universe origins?

I am not sure exactly what in that approach you like the most, and it has been some time since I went through it myself, but my impression is that there is surprisingly little done in this direction compared to the larger research programs.

One obvious problem is that some of the new lines of reasoning are so radical that a lot of the standard notions on which the well-tested standard model rests need to be reconstructed, which makes this cumbersome. So for sure, in the short run it seems easier to find a tweak to the current models than to aim for a total reconstruction.

There are some people doing different things which have elements of what I think is new thinking, but there isn't yet a coherent direction that I know of.

Like Nielsen mentions in his description, a mainstream idea is to look for symmetries under which the physical actions are invariant, and then as we do so, identify new forces and particles. Then unification usually means looking for the bigger über-symmetries that unify all previously known symmetries and provide unification.

I think this is an absolutely twisted way of doing it; I think it's flawed. Holger seems to also object to this general way of reasoning. If you ponder how to infer this über-symmetry from plain communication, then I think it's quite clear that a different angle is needed. The symmetry must be emergent.

Holger talks about the random Planck scale, and while I might reason differently, one idea is that, at say the Planck scale, things may be so chaotic that no observer can SEE the full symmetries. The symmetry is only seen by a massive observer.

Instead of starting with symmetries, pictured mathematically without clear physical basis, I think we can start with "random actions", and in this picture the complexity scale LIMITS the POSSIBLE symmetries. Complexity scale and energy scale are related, sometimes inversely; it depends on where you picture the observer. I.e. if you talk about the Planck scale, are you talking about a human lab probing the Planck scale in a future hypothetical accelerator (which won't happen soon anyway), or are we picturing an actual observer living at the Planck scale? My starting point is the latter, and then the random action is the decent abstraction.

To me there are two main points here.

1) To emphasise the "inside view" to a larger extent. In a certain sense the traditional symmetry argument is really an "external view" of how the "inside views" RELATE, by symmetry transformations. But note that this implies a third observer, a bird's view. This is not a coherent analysis IMHO.

Someone who seems to start to take this more seriously is Olaf Dreyer. Look him up. He has something he calls internal relativity. He has a good way of reasoning IMO.

Rovelli's Relational Quantum Mechanics paper also has a good start, but Rovelli gives it a different turn which IMHO breaks the coherence. Still, the initial reasoning is good.

The bird's view must go away. It's an ugly duckling :)

2) The evolutionary perspective: we must stop pretending that realistic models can be made of closed systems. To model open systems, evolving models are needed, I think. Here Lee Smolin, with his CNS and several of his talks, presents many good arguments in favour of this.

To me, these two ideas marry very well, and I think their combination is the key. But together they imply a dense reconstruction of what most physicists consider "basic formalism". So it's probably a long-term project.

/Fredrik
 
  • #53
Fra said:
Someone who seems to start to take this more seriously is Olaf Dreyer. Look him up. He has something he calls internal relativity. He has a good way of reasoning IMO.

I noticed before, and it could be appropriate to mention again here, that Olaf Dreyer even submitted a paper to this time contest.
http://fqxi.org/community/forum/topic/375

Unfortunately I haven't had time to read that yet, but I will.

(Just to get back on topic here, sorry for the diversion)

/Fredrik
 
  • #54
apeiron said:
Nielsen's approach suggests that the initial conditions are instead what others would call a state of vagueness - a chaos of all possibilities. This also would look "symmetrical" in some lights.

This is in line with my thinking too. The symmetry is restored in my picture since the MEASURE of symmetry only makes sense in terms of OBSERVED symmetry, and when this is constrained to a Planck-scale OBSERVER, then any asymmetry would not be distinguishable.

From the point of view of a true inside observer, simplicity is restored in the middle of the chaos. A human-designed accelerator probing the Planck scale doesn't qualify as an inside observer, because the whole measurement setup is rigidly attached to a massive environment that is largely controlled. Some logic that works fine in such cases may not work in the inside-observer scenario.

Cosmological scenarios are more like the inside observer case.

This is why the treatment of statistics and probability must be drastically different in particle physics as compared to, say, cosmological theories.

But not only that, I think that acknowledging the inside view is also the key to understanding the LOGIC even of the standard model of particle physics. I.e. WHY does the action look like it does, why do we see this mass spectrum and not anything else?

After all, from the point of view of the particles themselves, they are inside observers totally unaware of human accelerators. I expect the rationale behind the laws of particle physics to exist in that view.

So the inside view does not only become relevant for cosmology. I am personally more interested in its applications to the logic of the laws of physics in general, including the tuning problems etc.

As far as I can currently see, I think an evolving universe with evolving laws is the only reasonable solution. But an evolving universe on the cosmological scale, such as black hole reproduction, and the reproduction, copying and evolution of "compression" algorithms that encode the action of microphysics, are really two perspectives on the same process.

/Fredrik
 
  • #55
Fra said:
Like Nielsen mentions in his description, a mainstream idea is to look for symmetries under which the physical actions are invariant, and then as we do so, identify new forces and particles. Then unification usually means looking for the bigger über-symmetries that unify all previously known symmetries and provide unification. I think this is an absolutely twisted way of doing it; I think it's flawed. /Fredrik

There must of course be something in this because we observe broken symmetries and so feel naturally that in repairing them, we will get back towards their original source.

But there are still two radically different ways that this same observed outcome could be achieved. There may have been one unbroken, most general symmetry that crumbled in a cascade of symmetry breakings to reach its lowest-energy incarnation. Or, the different thought: there could have been an unformed chaos of potential (hand-waving here for the moment) which fell into all available symmetry "slots". A bit like a rain of pachinko balls getting hung up at various available levels.

So the first way of looking at it would demand some super-symmetry. All the observed symmetries of the standard model would have to fold back neatly into the one E8xE8 or whatever. And we know that project is not working so well.

The second way would instead say that all the various symmetries would not have to be related in this fashion. They would not have to heal to one super-symmetry. Instead, they could just be the collection of symmetries that (somewhat platonically) exist as resonant forms. And any cooling of chaotic potential would congeal in the various slots to various degrees.

This is a very untechnical description but it is a sketch of an alternative story that could result from taking an approach along the lines Nielsen seems to be exploring.

You say the symmetry must be emergent. This other view would be saying the broken symmetry - the crisp result - is emergent! And the ultimate symmetry never existed. The evolution (or rather development) is from the formless to the formed.

Now you would have to call that a very twisted view. But it is like a phase transition approach. Symmetry is discovered waiting as vapour cools to water then cools to ice.


Fra said:
1) To emphasise the "inside view" to a larger extent. In a certain sense the traditional symmetry argument is really an "external view" of how the "inside views" RELATE, by symmetry transformations. But note that this implies a third observer, a bird's view. This is not a coherent analysis IMHO. /Fredrik

I'm not clear here whether you are thinking along my lines here or not.

But, anyway I have come across the internalist argument in a different context.

https://webmail.unizar.es/pipermail/fis/2005-January/000868.html

Fra said:
2) The evolutionary perspective: we must stop pretending that realistic models can be made of closed systems. To model open systems, evolving models are needed, I think. Here Lee Smolin, with his CNS and several of his talks, presents many good arguments in favour of this.
/Fredrik

Here I agree completely that the open systems perspective would be more fundamental. My own background is in neuroscience and theoretical biology. So I am most interested in the areas where physics is starting to learn from biology and complex systems thinking.

Please don't take offence but Smolin takes perhaps the wrong part of biology. The focus should be on theories of development and self-organisation rather than theories of evolution and selection. And Nielsen is more in the development camp I feel.
 
  • #56
Fra said:
From the point of view of a true inside observer, simplicity is restored in the middle of the chaos. /Fredrik

Now this echoes my own preferences more clearly. For cites on those considering the quantum realm as a vagueness:

Chibeni, S. S.: 2004b, 'Ontic vagueness in microphysics', Sorites, n. 15 (December 2004), 29-41.
French, S & Krause, D [2003] 'Quantum vagueness', Erkenntnis 59, pp. 97-124.
Lowe, E J [1994] 'Vague Identity and Quantum Indeterminacy', Analysis 54, pp. 110-4.
 
  • #57
apeiron said:
There must of course be something in this because we observe broken symmetries and so feel naturally that in repairing them, we will get back towards their original source.

There is nothing wrong with symmetry arguments in themselves. My objection, however, is to the mathematization of the arguments, where symmetry arguments have more to do with mathematical beauty than with science, or, as I think of it, with foundation in communication processes.

I agree symmetries are important. But it's how they are treated. (Like I noted, I choose to focus on intrinsically "observed" symmetries, and in this picture, symmetries are generally evolving and thus emergent.)

It's this mathematical-beauty trend that I simply don't get. It's too much toying around, and I have no faith in the methodology. It's not the symmetry itself that I dislike. On the contrary, I expect the symmetry to be physical, not just mathematical. And physical to me means attached to interaction processes, and thus inferred from the interaction/communication process.

apeiron said:
You say the symmetry must be emergent. This other view would be saying the broken symmetry - the crisp result - is emergent! And the ultimate symmetry never existed. The evolution (or rather development) is from the formless to the formed.

Now you would have to call that a very twisted view. But it is like a phase transition approach. Symmetry is discovered waiting as vapour cools to water then cools to ice.

It's not at all twisted to me. I think this is a less speculative view. I.e. the symmetries are emergent, formed as evidence is acquired. Not speculative, external ad hoc conjectures.

The phase transition analogy is decent, but that would be how an external observer describes it. The inside observer does not SEE the phase transition process, it just sees an emergent symmetry, not the mechanism behind it.

To me the principle of minimum speculation is a guiding principle, but without illusions of universal simplicity. It's the evolution that flows according to the principle of least speculation.

apeiron said:
I'm not clear here whether you are thinking along my lines here or not.

But, anyway I have come across the internalist argument in a different context.

https://webmail.unizar.es/pipermail/fis/2005-January/000868.html

Thanks, I'll check this later!

apeiron said:
Here I agree completely that the open systems perspective would be more fundamental. My own background is in neuroscience and theoretical biology. So I am most interested in the areas where physics is starting to learn from biology and complex systems thinking.

Please don't take offence but Smolin takes perhaps the wrong part of biology. The focus should be on theories of development and self-organisation rather than theories of evolution and selection. And Nielsen is more in the development camp I feel.

I have no _formal_ background in bio at all, but I have studied some biochemistry and molecular biology as side projects, and I am also very much inspired by this. I agree with you that in some respects physicists seem to have a lot to learn from biology and complex living systems.

I also agree with you about Smolin :) I like him, because he is one of a few physicists that reasons in a way to my liking, but that doesn't mean I like everything. I do not. But as always, take what you like and leave the rest, if that makes your own coherent soup :)

/Fredrik
 
  • #58
Fra said:
There is nothing wrong with symmetry arguments in themselves. My objection, however, is to the mathematization of the arguments, where symmetry arguments have more to do with mathematical beauty than with science, or, as I think of it, with foundation in communication processes.

Symmetry in physics is more like a hidden way of reasoning by adding speculative constraints. And there is nothing wrong with this, but if we stop pretending it's something it's not, and instead try to understand what its pure nature is, it will be even more powerful.

If you look at a symmetry as a statement containing information about a system, one realizes that there is a massive amount of information implicit in a symmetry assumption. If you add to this the idea that information capacity is limited by energy and mass, then it's not really a stretch to see that the confidence in a symmetry, seen as acquired information, scales with the information capacity.

So the flat idea of universal symmetries that give us "constraints for free" lacks physical basis IMHO. The reason this happens to work fine, so far, is related to an argument Smolin made in a talk on time and evolving laws (available as an mp3 somewhere; I saved the mp3 but forgot which talk it came from). The difference is the inside vs outside observer, closed subsystems vs open systems.

Edit: This is almost like the "spherical cow" metaphor. Who has not seen the typical physicist's abstraction: consider a closed system, with initial conditions, ...

Now, all premises must have resulted from experience, or communication processes. One might certainly question what the process looks like whereby you conclude that a system is closed. And is the confidence in that conclusion infinite? (It's certainly not.) So closedness as such - either it's closed or it's not - isn't even a valid premise. I think there is no physical process whereby you can arrive at that premise.

You can THINK that the system is closed, and act accordingly. This we do all the time. But this is something different. It means that the action should reflect this uncertainty, and not run into hard inconsistencies if the premises were wrong. Instead it should respond promptly to feedback that contradicts the prior premise, revise, and evolve. The rigidly constructed models in the past spirit rarely allow such actions. Instead it would be considered an inconsistency and run into a HALT. I have yet to see nature run into a HALT.

/Fredrik
 
  • #59
Fra said:
I noticed before, and it could be appropriate to mention again here, that Olaf Dreyer even submitted a paper to this time contest.
http://fqxi.org/community/forum/topic/375

Unfortunately I haven't had time to read that yet, but I will.

(Just to get back on topic here, sorry for the diversion)

It's clear that Dreyer has good ambitions and that he still has a lot to do. But nevertheless some of his basic motivation is something that I fully share. I think I may differ in a few ways, but regardless of how he arrived at the position (something that I can't read out of his papers), he mentions a few key points that make perfect sense to me and are right in line with how I think.

"The problem of time arises because the metric is distinct from matter fields. In this essay we argue that it is this split between geometry and matter that is to blame for the problem of time together with a number of other problems."

As he argued previously, matter and geometry must emerge together. Their separation is artificial, and this artificial split is the root of several problems. I personally fully agree with this.

The point where I am not sure what he means is this

"What then is the nature of time that we are proposing? In internal relativity time appears in two different ways. The first notion of time would be completely familiar to a Newtonian physicist. This time is the background time of the theory. The second notion of time would be less familiar to a Newtonian physicist. It arises because the presence of a background time does not imply that an observer has access to it. Rather, the behavior of matter determines what the observers will measure. As we have seen this means that the internal time as it is measured by an observers differs from the background time."

The fact that he acknowledges the importance of what inside observers measure is good, and the most important thing. But I have a feeling that he misses what I think is a key point. What worries me is that he intends to use a non-accessible background structure to explain internal relations. That is, despite his great ambitions, dangerously close to the standard view (although he reformulates it), so I wonder if he will solve anything by this. But it could be that I misunderstand him there.

He writes that matter determines what the observers will measure, and I agree, though I think of it as matter somehow being the observer, or making up the observer.

Thus it is completely clear that there will never be an observer to observe empty space, because the mere fact that there is an observer suggests it's not empty. This is why I think the "pure gravity" approaches are doomed to be inconsistent as a proper measurement theory.

I rather think of time as existing only within an observer's history. Even though he also says that what is important is the observed time and space, I do not like the idea that it has to be expressed in terms of an "inaccessible" external structure. Not just for esthetic reasons, but because this very image also needs a physical basis.

The relation between the inside views is, again, only speakable from an inside view. Correlations between different clocks are then just emergent. Maybe this is what Dreyer means by "not accessible external time", as in a non-accessible emergent symmetry. If so, we are quite close.

I'm excited to see any of Dreyer's future papers. I think his ideas are interesting - the best of the essays I've read, though I haven't gotten around to reading them all, unfortunately.

/Fredrik
 
  • #60
apeiron said:
The second way would instead say that all the various symmetries would not have to be related in this fashion. They would not have to heal to one super-symmetry. Instead, they could just be the collection of symmetries that (somewhat platonically) exist as resonant forms. And any cooling of chaotic potential would congeal in the various slots to various degrees.
...

Please don't take offence but Smolin takes perhaps the wrong part of biology. The focus should be on theories of development and self-organisation rather than theories of evolution and selection. And Nielsen is more in the development camp I feel.


From my perspective, the key issue is how information is physically defined / measured / communicated. We know that the world does in fact do this, because of course we all actually experience a tremendous amount of information in our physical environment. But physics has not traditionally explored how a self-communicating system might work -- i.e. provide ways to define and communicate every one of its own elements in terms of other measurable elements.

If we knew how to describe the world this way, as a self-defining communications system, then perhaps we could envision the original universe as something like a quantum vacuum, where anything at all can and does happen -- but in such a lawless and structureless environment there would be no way for anything that happens to make a difference to anything else. So in effect there is nothing, all events remain merely "virtual."

But we could imagine that within this chaos there happen to be any number of different networks of interlinked events... and that among these could be a network that happens to be able to define and communicate something, internally. And then that minimal shared definition might be the basis for a sub-set of that network of events, that could define more complex and specific information within the framework established by the first.

I don't know whether this will be at all evocative to you -- I hoped to say more, but my son just arrived for a visit unexpectedly and my morning plans must change. Anyway, talk about hand-waving!...

But this relates to the question of symmetries in the following way. A symmetry basically means we can define X (say the circumference of a circle) without needing to specify Y (any specific point on the circle). So wherever there is a symmetry, it's pointing to one type of information being potentially more fundamental than another, i.e. needing less of the structure of the physical world to be meaningfully definable and communicable.

So if the world evolved through a sequence of levels -- i.e. networks of events capable of defining more and more information, given the informational structure established by lower levels -- then that would manifest itself in a highly evolved structure containing many different kinds of symmetry.


I have my own prejudices about "self-organization" vs "evolution" in biology -- but will have to wait. Oh yes, and this thread is about time in physics! Will get back to that later too.

Thanks -- Conrad
 
  • #61
ConradDJ said:
If we knew how to describe the world this way, as a self-defining communications system, then perhaps we could envision the original universe as something like a quantum vacuum, where anything at all can and does happen -- but in such a lawless and structureless environment there would be no way for anything that happens to make a difference to anything else. So in effect there is nothing, all events remain merely "virtual."
Conrad

This approach is 1) ontic vagueness coupled with 2) pansemiosis. And you would find it expressed well in the philosophy of CS Peirce, the logician, semiotician and founder of pragmatism.

quoting Peirce...
"If we are to proceed in a logical and scientific manner, we must, in order to account for the whole universe, suppose an initial condition in which the whole universe was non-existent, and therefore a state of absolute nothing."
"But this is not the nothing of negation. . . . The nothing of negation is the nothing of death, which comes second to, or after, everything. But this pure zero is the nothing of not having been born. There is no individual thing, no compulsion, outward nor inward, no law. It is the germinal nothing, in which the whole universe is involved or foreshadowed. As such, it is absolutely undefined and unlimited possibility -- boundless possibility. There is no compulsion and no law. It is boundless freedom.
"Now the question arises, what necessarily resulted from that state of things? But the only sane answer is that where freedom was boundless nothing in particular necessarily resulted. I say that nothing necessarily resulted from the Nothing of boundless freedom. That is, nothing according to deductive logic. But such is not the logic of freedom or possibility. The logic of freedom, or potentiality, is that it shall annul itself. For if it does not annul itself, it remains a completely idle and do-nothing potentiality; and a completely idle potentiality is annulled by its complete idleness. (CP 6.215-219)
"I do not mean that potentiality immediately results in actuality. Mediately perhaps it does; but what immediately resulted was that unbounded potentiality became potentiality of this or that sort -- that is, of some quality. Thus the zero of bare possibility, by evolutionary logic, leapt into the unit of some quality. (CP 6.220)
"The evolutionary process is, therefore, not a mere evolution of the existing universe, but rather a process by which the very Platonic forms themselves have become or are becoming developed. (CP 6.194)"
"[W]e must not assume that the qualities arose separate and came into relation afterward. It was just the reverse. The general indefinite potentiality became limited and heterogeneous. (CP 6.199) The evolution of forms begins or, at any rate, has for an early stage of it, a vague potentiality; and that either is or is followed by a continuum of forms having a multitude of dimensions too great for the individual dimensions to be distinct. It must be by a contraction of the vagueness of that potentiality of everything in general, but of nothing in particular, that the world of forms comes about. (CP 6.196)
"Out of the womb of indeterminacy we must say that there would have come something, by the principle of Firstness, which we may call a flash. Then by the principle of habit there would have been a second flash. Though time would not yet have been, this second flash was in some sense after the first, because resulting from it. Then there would have come other successions ever more and more closely connected, the habits and the tendency to take them ever strengthening themselves, until the events would have been bound together into something like a continuous flow. (CP 1.412)
end quote...
 
  • #63
Marcus, where could I best initiate a discussion of concepts presented in these papers? For example, the argument that time is a result of motion...it seems to me that the idea of motion already includes time, and leads to a circular argument when motion is used to define time.

And ephemeris time. Instead of using a collection of motions of stellar objects to fix time, why not simply use the furthest observable object as a fixed direction? We can now see objects so distant that they are, for all practically measurable purposes, constant. So the length of a day relative to that fixed direction is a constant, and it also fixes sidereal time, since such a distant object could be used to make a direct correlation between sidereal and diurnal time.

I don't have anyone to talk to about these things and discussion is often the key to understanding...
 
