So what about the FQXi time essay contest? It's February already.

  • #31
I won't repeat my inchoate comments on timelessness. Not exciting to do it again.

Christine, how long someone has thought about a subject makes rather little difference to the novelty of their thought. I'm sorry that the "I've been doing this for a long time" slipped into my comment. If a good mathematician ever applies themselves to random fields, they will be past my ten-year head start in a week.
 
  • #32
Peter Morgan said:
I won't repeat my inchoate comments on timelessness...

Not so inchoate! :biggrin: I will look back at your comments specifically as regards the main points of the "Forget time" essay. The gist of the essay is summed up in the conclusions section at the end:
==quote==
I have presented a certain number of ideas and results:

1. It is possible to formulate classical mechanics in a way in which the time variable is treated on equal footings with the other physical variables, and not singled out as the special independent variable. I have argued that this is the natural formalism for describing general relativistic systems.

2. It is possible to formulate quantum mechanics in the same manner. I think that this may be the effective formalism for quantum gravity.

3. The peculiar properties of the time variable are of thermodynamical origin, and can be captured by the thermal time hypothesis. Within quantum field theory, “time” is the Tomita flow of the statistical state ρ in which the world happens to be, when described in terms of the macroscopic parameters we have chosen.

4. In order to build a quantum theory of gravity the most effective strategy is therefore to forget the notion of time all together, and to define a quantum theory capable of predicting the possible correlations between partial observables.
==endquote==

I don't feel I can paraphrase your views at all adequately, Peter. But I am guessing that the only engagement is with points 1. and 2. We can set aside the other two because either they are more speculative or they have to do more with ideas about constructing a quantum theory of gravity.
Looking just at 1. and 2. I would guess that you wouldn't object to point 1. That doesn't involve probabilities arising from quantum amplitudes. It is purely classical, and just says a timeless formalism is possible and (in the case of GR) natural.

Point 2. is where you might strongly disagree, unless I'm mistaken. You might grant that it is possible to formulate a quantum mechanical system in the timeless way proposed, depending on what one expects to get out of doing this. But you might question the fundamental validity, or the practical point, of doing that. You especially emphasized probabilities. In Rovelli's setup one can, I believe, repeat the same experiment over and over, so one can accumulate empirical probabilities. The probabilities describe correlations between observables.
One can repeat the experiment, varying only a few parameters, or no parameters.
Perhaps I am mistaken, and you will disagree.
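A toy numerical sketch of that last point (entirely my own, with an invented joint distribution): repeating the same experiment many times lets empirical frequencies converge on the probabilities that describe the correlations between two observables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experiment": two correlated binary observables A and B.
# Joint distribution chosen by hand: P(A = B) = 0.8.
p_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
outcomes = list(p_joint)
probs = [p_joint[o] for o in outcomes]

# Repeat the same experiment many times and accumulate statistics.
n_trials = 100_000
draws = rng.choice(len(outcomes), size=n_trials, p=probs)
counts = np.bincount(draws, minlength=len(outcomes))
empirical = counts / n_trials

# Empirical frequencies approach the predicted probabilities.
for (a, b), f in zip(outcomes, empirical):
    print(f"P(A={a}, B={b}) ~ {f:.3f}")
```

With a hundred thousand repetitions the frequencies land within a percent or so of the predicted values, which is all "accumulating empirical probabilities" amounts to here.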

To me the setup looks pragmatic and empirical. This may not be so from your perspective.
That is where I sense possible disagreement.
===================

I just read the article by N.P. Landsman you linked to:
http://www.math.ru.nl/~landsman/EBpubl.pdf
great quotes and a fascinating writer---too much to assimilate in one sitting, but too interesting to stop reading.
 
  • #33
"All known physical (relativistic and nonrelativistic) hamiltonian systems can be formulated in this manner." Non-Hamiltonian systems cannot be formulated in this way, but all Hamiltonian systems can be.

Only if the model describes everything in the universe should it be conservative, and hence Hamiltonian. A truly fundamental model does model absolutely everything in the universe, with no degrees of freedom at any scale left out; the map really is the territory, so that would be alright. If, for a trivial example, there's a fractal structure, turtles standing on turtles all the way up and down, with the turtles at different scales always of a different kind, I think it's problematic.

I find the approximate relationship between ideal models and experimental results, which is all we've ever had in the past, very different from a discussion of ultimate, perfect models.

I left a comment on Rovelli's FQXi paper, which he answered but I didn't have the will or the skill to pursue, more-or-less to the effect that a von Neumann algebra, which is required to be able to construct a Tomita flow, is too potent an analytic structure to introduce without far more justification than I think he gave. [In part, I so liked the way that he responded, and that he responded so nicely to other people too, that I didn't want to rain on his parade.] In particular, I don't think we can introduce such a potent analytic structure in the same breath as a Hamiltonian formulation of quantum field theory, which can be constructed as an interacting model only in an ill-defined way, at least as far as we know. We could note, though this is not at all definitive because we can manage unbounded operators if we are very careful about their domains, that the Hamiltonian is never a bounded operator, whereas a von Neumann algebra contains only bounded operators. We can calculate renormalized Wightman functions in perturbation theory that for some experiments match observation to many significant figures, but introducing an analytic structure is a very different game. The big conclusion in his FQXi paper, the application of Tomita flow, is very flawed, in my opinion, though it conceivably might be justifiable nonetheless.
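To make the Tomita-flow ingredient concrete in the one setting where it is unproblematic: in finite dimensions, where every operator is bounded, the modular flow of a faithful (full-rank) state ρ is just σ_t(A) = ρ^{it} A ρ^{-it}. A minimal sketch, my own illustration rather than anything from Rovelli's paper:

```python
import numpy as np

def modular_flow(rho, A, t):
    """Finite-dimensional modular (Tomita) flow: sigma_t(A) = rho^{it} A rho^{-it}."""
    w, V = np.linalg.eigh(rho)                     # rho = V diag(w) V^dagger, w > 0
    U = V @ np.diag(w ** (1j * t)) @ V.conj().T    # rho^{it}, a unitary
    return U @ A @ U.conj().T

# A faithful thermal-like state on C^2 and an observable (sigma_x).
rho = np.diag([0.7, 0.3]).astype(complex)
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

At = modular_flow(rho, A, t=1.0)

# The flow leaves the state itself invariant (rho commutes with rho^{it})
# and preserves expectation values in the state rho.
print(np.allclose(modular_flow(rho, rho, 1.0), rho))  # True
```

The point of contention is not this finite-dimensional toy, of course, but whether the analytic structure needed to run the same construction for an interacting quantum field theory is available at all.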

I think I disagree with 1, 2, and 3, making 4 moot.

I dunno, Marcus. My very flawed attempt at a constructive mathematics, in the easy territory of quantum field theory, is very different from Rovelli's constructive approach, in the much harder terrain of quantum gravity, so I'm hardly competent to engage with this.
 
  • #34
...I left a comment on Rovelli's FQXi paper, which he answered... [In part, I so liked the way that he responded, and that he responded so nicely to other people too, that I didn't want to rain on his parade.]
You keep pointing me in interesting directions. First the Landsman paper, and now you gave me some motivation to read the comments and discussion of Rovelli's paper at FQXi:

http://fqxi.org/community/forum/topic/237

I was just looking at the 13 October post where he replies to a bunch of comments.
"...In any case, I am aware that the thermal time hypothesis is highly speculative. I would like the readers to keep it separate from the main idea defended in the essay, which is that mechanics can be formulated without having to say which variable is the time variable..."
The thermal time and the Tomita flow business are indeed speculative and seem secondary to his main idea.

Ah! I see your comment of 16 October and Rovelli's reply of 19 October, which begins:
"...Peter Morgan raises an extremely good issue, with both a technical and a conceptual side. I refer here to his post above, without trying to repeat here his points, since these are several, interconnected, and nicely expressed by Peter..."

Wow! There is some remarkable material in these comments which I had no idea was there! There is a comment from the Other Peter (Peter Lynds) of 22 October, and Rovelli's 24 October reply, which sheds light on his personal view of LQG.

==quote==
Dear Peter,

thanks for rising this key point. You say: "Are you not assuming the existence of time by asserting that time (and space) are quantized, and come as minimum, indivisible atoms in Loop Quantum Gravity"? Very good point. Here is what I think:

Einstein great discovery, of course, is that the two things are in fact the same. The two things are: on the one hand, the gravitational field, and on the other the two "entities" that Newton put at the basis of his picture of the world, and called "space" and "time". Now, when you discover that mister A and mister B are the same person, you can equally say that mister A is in reality mister B, or that mister B is in reality mister A. Books like to say that the gravitational field, in reality, is nothing but the spacetime, which happens to curve and so on. I prefer the opposite language: namely that the entities that Newton called "space" and "time" are nothing else than the gravitational field, seen in the particular configuration where we can disregard its dynamical properties, and assume it to be flat. The choice is not just a choice of wording. My understanding is that the deep discovery of Einstein with general relativity is not that the gravitational field is very special, but, the other way around, that it is just a field on the same ground as the other fields. The key novelty with respect to pre-general-relativistic physics is that all these fields do not live "in" spacetime: they live, so to say, "on top of one another". (In fact, I think that this was also Einstein's view. He writes for instance "Spacetime does not claim existence on its own but only as a structural quality of the [gravitational] field", in "Relativity: The Special and General Theory", page 155.) So, I think that the clearest way of thinking about general relativity, or, more precisely, the general relativistic theory that , at best as we know, describes our world, and which includes the gravitational field and all the other physical fields, is to view it as a theory of interacting fields, without any need of making reference to space and time. What we have is observable quantities that are functions of these fields.

Now, from this point of view (which is mine), the "atoms of space" of loop quantum gravity are truly just quanta of the gravitational field. The reason we call them "quanta of space" is only because we use to call "space" the quantity measured by a meter. But a meter only measures the gravitational field. And the same with time and a clock. The reason we keep talking about "space" and "time" in loop quantum gravity is only because these are traditional names for indicating aspects of the gravitational field. But these names are ill-used, if we assume them to carry all the heavy ontological significance of Newtonian space and Newtonian time. They represent observable variables (measured by clocks and meters), on the same ground as many other quantities observed in nature.

This is why I think that in order to have a clear picture the easiest thing is to "forget space" and "forget time", and only to talk about relations between observable quantities. The "atoms of space" and the "atoms of time" of LQG are only figures of language, to indicate that certain physical observables aspects of the gravitational field have a discrete spectrum.

I am very glad you have raised this point.

Carlo Rovelli
==endquote==
 
  • #35
Peter, I can see why you might have wanted to exercise restraint and not "rain on the parade". The way Rovelli is handling the questions is a class act. Nice and clear at the same time. Here's a 9 November post that clears up a problem people often have.
==quote==
Dear Bob,

thanks for the question, which is very appropriate. Let me give a dry answer first, and then explain:

> In the timeless picture you propose there is no unitarity, right?

Right: more precisely, there is no unitarity in the usual sense.

> Does this mean that probability conservation can be violated?

No: probability conservation is not violated.

Let me explain. In usual quantum theories, unitarity is the request that the change of the state *in time* is given by a unitary operator. It follows that probability is conserved *in time*. In a theory in which there is no preferred time variable, this request obviously looses its meaning. This is why unitarity in the usual sense is not present in the timeless formulation. Nonetheless, probability must be "conserved". This means that the probabilities of all the possible specific-measurement's outcomes predicted by the theory must sum up to one. Unitarity in *this* sense must of course be implemented by the timeless theory, and it is.

The answer is different in the statistical context. In this context, thermal time emerges, and therefore we have a unitarity requirement again. In this case, the evolution in thermal time turns out to be unitary by construction.

Thanks also for bringing back the discussion to the actual content of the essay. I do not think that this forum is the proper place for discussing alternative points of view, especially if discussed in other FQXi essays, or issues which are too general.

Carlo Rovelli
==endquote==
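Rovelli's "dry answer" above is easy to check in miniature: any unitary U preserves the norm of the state, so the outcome probabilities still sum to one. A sketch with illustrative numbers only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a random unitary U via QR decomposition of a complex Gaussian matrix.
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(M)

# A normalized state: outcome probabilities |psi_i|^2 sum to one.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# "Evolve in time" with U: probabilities still sum to one.
phi = U @ psi
total_before = np.sum(np.abs(psi) ** 2)
total_after = np.sum(np.abs(phi) ** 2)
print(total_before, total_after)  # both ~ 1.0
```

The timeless reformulation drops the "in time" part of this statement while keeping the part the computation checks: whatever the full set of predicted outcome probabilities is, it sums to one.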

Also noteworthy is George Ellis' post of 12 December which challenges several of the points made in the essay. It's fairly long. I'll just post the link again and encourage people to read it.
http://fqxi.org/community/forum/topic/237
George Ellis is a world expert in cosmology, relativity, and foundations. He also submitted an essay to the FQXi contest.

This was an amazing contest. It may have had a subtle transformative effect on the conceptual weather. Thanks to Peter Morgan for clueing me to look into those FQXi comments.

George Ellis takes the "time is real" line, and his essay won the second Community prize.
In case anyone is interested I'll get the link to that as well.
http://fqxi.org/community/forum/topic/361
http://fqxi.org/data/essay-contest-files/Ellis_Fqxi_essay_contest__E.pdf
 
  • #36
I haven't followed the entire thread or context due to lack of time lately :( but one of a few things that sticks out in the timeless reasoning of Rovelli and others, and that I mysteriously don't find consistent with the RQM spirit, is that he pulls out structures like a state space without headache, and never ever seems to question how this state space is "communicated". There is some incoherent use of reasoning here, from my perspective.

As Rovelli so excellently argues in his RQM paper, the only way to compare information between observers is by physical interactions.

But what happened to the information implied in notions such as "state space"?

This doesn't make sense to me. I'm much more taken with the way Smolin argues that state spaces are evolving. And I think the natural way to unite that with the RQM reasoning is to try to describe the physical process whereby the state space is formed, as seen by a specific choice of observer.

The same goes for the notion of laws, which certainly contains information.

What bugs me more than anything else is that the observable ideal seems to apply oddly to some mathematical objects (say, states) but not to others (such as state spaces and laws).

So far Smolin's evolution ideas seem to be the only sensible resolution. He has some great points regarding inside vs. outside observers. The physics as seen by a true inside observer is still missing. A lot of the conclusions seem to have been drawn from experience with physics where one can consider an external observer, and then arguments are made that by analogy this should hold also for the general case. I have a very hard time finding such arguments convincing.

/Fredrik
 
  • #37
Peter Morgan said:
...When we have a probabilistic or quantum-mechanical theory, we don't even have the individual events in the mathematics, we don't even have statistics, we only have expected values. Of course there are engineering rules for how we should make the comparison between statistics and expected values, and some people are much better at using those heuristics than others...

Hello Peter, it seems you are bothered by the lack of a proper link between the abstraction of probability distributions and the more physical stream of individual events?

I recall from past discussions of your random field paper that you seek different solutions, but this is a key problem that I think is at the heart of the matter, and it's what has led me toward a reconstruction of the notion of probability, which includes a reconstruction of the continuum. The idea would be to consider the emergence of probability distributions and probability spaces as a physical process. The notion that some experiments are effectively observed only once or twice then means that those probability spaces are themselves very volatile, if one assigns statistical weights to the probability spaces themselves; and if one takes this to be a physical process of a self-organizing observer, the whole process is constrained by the complexity of the observer. This would explain the two extreme cases of observation: humans studying a small subsystem in an accelerator vs. a human-based observatory making observations on the cosmological scale. Such a reconstruction should make explicit the difference in how the statistics are properly built, since the relative complexity of observer vs. observed would be a key.

I have a feeling that somehow you were looking for an alternative solution to this problem.

As far as I understand it, Rovelli explicitly avoids this problem: in his book he writes in a footnote that he doesn't want to discuss the meaning of probability. Up until that point, I think Rovelli is very clear.

/Fredrik
 
  • #38
As I think you hinted, this problem is probably present also pre-QM, and related to the ergodic hypothesis etc. But I think it escaped analysis at that time; it is with the advent of the higher standards of QM, as a _theory of observables_, that these problems became more obvious. To do QM without facing these questions is like trying to use measurement gauges to chart the world without for a second considering that it may be relevant to understand the process by which these measurement gauges were devised, and that there is even an ongoing revision of these gauges.

Ie. it's like a theory of measurement without a theory of the measurement devices used to produce and store the measurement results. It seems the only reasonable explanation why these problems are still around is that it hasn't been fashionable physics?

/Fredrik
 
  • #39
Hi, Fredrik.

I'm not much bothered by the lack of relationship between statistics of events and probability densities in mathematical models. I'm happy enough to be fairly pragmatic about the relationship between models and the world, in what I would call a post-positivist way.

I don't like other people's rather Platonic accounts of the relationship between mathematics and the world, which typically assert a metaphysical interpretation of probability (something like propensities). I don't insist on verification or on falsifiability, which I think are interesting but crude measures of how interesting a theory is, but I'd nonetheless like something with a little more solidity than that. I think Carlo Rovelli wants to be only slightly Platonic, at least compared to some of the other characters in the QG game, so I can't get too excited about him not wanting to expend a lot of effort on discussing the relationship between statistics and probability.

I see your concern, but I think I'm comfortable with more-or-less glossing the distinction between statistics and probabilities, most of the time, so I can try to address what I take to be more pressing worries. I think that's somewhat different from Carlo Rovelli's position, insofar as my philosophical position is more empiricist.

Earlier today, I read the physics arxiv blog comments on http://www.technologyreview.com/blog/arxiv/23144/ (longer, more worthwhile). What's the relevance to your comments? To do compressed sensing, we do randomly chosen measurements, which allow us to reconstruct a model (prototypically a photograph, but more generally anything that requires a lot of data to be gathered) of the real world. There is a sense in which No Measurement Is Repeated, but all the measurements are instances from a desired class of measurements. Your comment, "Thus the notion that some experiments are effectively just observed once or twice, means that those probability spaces themselves are very volatile", fits quite nicely into this approach. Something that is observed once or twice (ball lightning, perhaps? Supernovae, etc.) is placed into the giant compressed sensing dataset that is everything ever observed by a physicist, out of which pops, with a little gentling, a model that approximates all the data, despite gaping holes in the data. [Of course, it's a stretch to say that all of Physics is a compressed/ive sensing process.] The isolated measurements may have pride of place or not, but in some statistical sense they will fit in the overall pattern.
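The compressed-sensing remark can be made concrete with a toy reconstruction. This is my own sketch, with invented sizes, using a standard orthogonal-matching-pursuit recovery: a modest number of random measurements recover a sparse signal, despite "gaping holes in the data".

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Least-squares fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

n, m, k = 256, 80, 3                 # signal length, number of measurements, sparsity
x = np.zeros(n)
x[[10, 80, 200]] = [5.0, -4.0, 3.0]  # a 3-sparse "scene"

# Randomly chosen measurements, far fewer than n.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

x_hat = omp(Phi, y, k)
print(np.max(np.abs(x_hat - x)))     # near machine precision if the support is found
```

Eighty random measurements of a 256-sample signal are nowhere near enough to fix the signal in general; sparsity is what makes the reconstruction work, which is the "desired class of measurements" point in miniature.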

The detailed algorithmic process of engineering a description, such as compressed/ive sensing, constitutes a pragmatic theory of measurement that is very active. Not that more and different can't be done and be interesting and perhaps even fashionable.

I don't think the distinction between statistics and probabilities "escaped analysis" in classical statistical physics. There was a lot of effort to give a solid grounding to the ergodic hypothesis, for example, as I'm sure you know, but no-one was much satisfied with the strength of the assumptions that had to be made to be able to prove much.

I'm sorry to say that I suspect I haven't engaged much with your argument. I think I got sidetracked at a few points.

Peter.
 
  • #40
Peter Morgan said:
I'm not much bothered by the lack of relationship between statistics of events and probability densities in mathematical models. I'm happy enough to be fairly pragmatic about the relationship between models and the world, in what I would call a post-positivist way.

Ah ok, I think I remember now, from the last discussions.

Peter Morgan said:
What's the relevance to your comments? To do compressed sensing, we do randomly chosen measurements, which allow us to reconstruct a model (prototypically a photograph, but more generally anything that requires a lot of data to be gathered) of the real world. There is a sense in which No Measurement Is Repeated, but all the measurements are instances from a desired class of measurements. Your comment, "Thus the notion that some experiments are effectively just observed once or twice, means that those probability spaces themselves are very volatile", fits quite nicely into this approach. Something that is observed once or twice (ball lightning, perhaps? Supernovae, etc.) is placed into the giant compressed sensing dataset that is everything ever observed by a physicist, out of which pops, with a little gentling, a model that approximates all the data, despite gaping holes in the data. [Of course, it's a stretch to say that all of Physics is a compressed/ive sensing process.]

Data compression is conceptually highly relevant to my thinking, and I think I see your point.

I picture self-organization as a kind of reorganization of data, which effectively is a kind of data compression, but there is one important difference from normal data compression, one quite analogous to the human brain. Research has indicated that the human brain does not optimize the storage of memories for historical accuracy; instead the memory is reorganized and optimized to be of maximum use in predicting the future. This is an interesting aspect. I think the story is similar in physics. The whole point of retaining the past is that, by being able to predict the future, the observer can increase its own odds of "survival" (in the evolutionary picture). Thus some historical accuracy is lost or deformed, for the benefit of more fitness in the future. The brain self-organizes and decides HOW to deform and tweak this data; it's an intelligent form of evolving data-compression "algorithm" that is not possible to predict. The evolving picture is critical. This is similar to Smolin's argument, taken also from biology, that it is in general impossible to replace an evolving configuration space with a large super-space, because it would be so big that the computer necessary to perform even the simplest calculations would not fit on earth!

edit: this kind of conclusion is what is totally lost when holding too strong a mathematical attitude; the complexity of representation and computation is IMHO constrained by the physics, and should therefore constrain the mathematics. Hopefully this should help us build not only self-consistent mathematics from the armchair position, but also a mathematics that is consistent with the physical constraints of computation and representation.

This means that the data storage cannot be a simple time history. It would be far too inefficient, given that the memory structure is limited. This is why patterns must be found, and then a compressed, truncated form of the history must be stored. This contains many interesting problems, such as decision problems as to what data to discard.
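That "compressed, truncated form of the history" can be caricatured in a few lines (my own toy, with an invented signal and parameters): store only the dominant Fourier modes of a long time history and discard the rest.

```python
import numpy as np

rng = np.random.default_rng(3)

# A long "time history": a few strong periodic patterns plus noise.
t = np.arange(1024)
history = (2.0 * np.sin(2 * np.pi * 5 * t / 1024)
           + 1.0 * np.sin(2 * np.pi * 17 * t / 1024)
           + 0.1 * rng.normal(size=t.size))

# Lossy compression: keep only the k largest-magnitude Fourier coefficients.
k = 8
spectrum = np.fft.rfft(history)
keep = np.argsort(np.abs(spectrum))[-k:]   # indices of the dominant modes
compressed = np.zeros_like(spectrum)
compressed[keep] = spectrum[keep]
reconstruction = np.fft.irfft(compressed, n=t.size)

# Most of the signal survives even though most coefficients were discarded.
ratio = np.linalg.norm(history - reconstruction) / np.linalg.norm(history)
print(f"kept {k}/{spectrum.size} coefficients, relative error {ratio:.3f}")
```

The decision of which coefficients to keep is exactly the "what data to discard" problem, solved here by the crudest possible rule: keep whatever is loudest.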

There are IMHO even very interesting analogies between this decision process of discarding "useless information" and black-hole Hawking-type radiation. The radiation might well be considered "useless", containing no information, from the point of view of the black hole itself, but it might well contain information relative to a large outside observer. This is one thing I have in mind that I think this reasoning can be applied to.

But also the way a system interacts, and how the different forces unify, might relate to the data compression, as having mixed and truncated some more fundamental data stream. The one-pixel camera makes me think of a "one-state" communication channel; it might still be possible to create a complex, staged image by driving the compressed and evolving image from such a one-state channel. So that, in effect, eventually that communication channel might grow.

/Fredrik
 
  • #41
Hey Peter, I don't know if you've seen Holger Nielsen's reasoning that he calls "random dynamics". I admit it has been some time since I skimmed it, and I don't recall the details or to what extent I share his reasoning, but I remember that he is quite ambitious.

The suggestive name he chose is why I came to think of it here, since you talk about random fields :)

He's a Danish physicist at NBI.
http://en.wikipedia.org/wiki/Holger_Bech_Nielsen

His random dynamics reasoning is loosely outlined at
http://www.nbi.dk/~kleppe/random/qa/qa.html

Among other things he says

"What we take as the fundamental "world machinery", is a very general, random mathematical structure, which contains non-identical elements and some set-theoretical notions."

If I am not mixing this up with someone else, I think Lee Smolin also briefly mentioned Holger Nielsen in one of his Cosmological Natural Selection papers. But I don't remember which paper.

/Fredrik
 
  • #42
Fra said:
This means that the data storage cannot be a simple time history. It would be far too inefficient, given that the memory structure is limited. This is why patterns must be found, and then a compressed, truncated form of the history must be stored. This contains many interesting problems, such as decision problems as to what data to discard.

This is also the sense in which one can realize the prior information implicitly carried by an evolved structure. It is a compressed form of coded history, far more efficient than a time history.

So emergent structures can abstractly be seen as evolved, compressed (lossy compression, of course) histories.

/Fredrik
 
  • #43
Thanks, Fra. I looked at Holger Nielsen's Random Dynamics with some interest. I would say, ignoring that I might be wrong about whether Lie random fields are interesting, that he's somewhat stuck in a curious mix of classical and quantum thinking, and that he might find random fields to be a beginning of a way out. The similarity of names is not quite completely irrelevant. I've e-mailed him to see if we might have a conversation.

I see I've set you off with my talk of data compression! I can see it all away in the sky ahead, and I quite like it, but I don't see it leading to mathematics. I don't think we can make Physics formally into a compressed sensing mathematics, unless we already have a UUT (Universal Unified Theory --- from me satire, from them serious). I should have said that it's a stretch to say that all of Physics [can be understood formally as] a compressed/ive sensing process, even though it may be very productive indeed to set up a given experiment as a compressed sensing process.
 
  • #44
I'm glad you found that interesting. About whether he's stuck somewhere, I'd have to read his reasoning again to even comment. I read some of it a long time ago, and remember that it did make an impression, even though there was quite a lot left to do.

Peter Morgan said:
I see I've set you off with my talk of data compression! I can see it all away in the sky ahead, and I quite like it, but I don't see it leading to mathematics. I don't think we can make Physics formally into a compressed sensing mathematics, unless we already have a UUT (Universal Unified Theory --- from me satire, from them serious). I should have said that it's a stretch to say that all of Physics [can be understood formally as] a compressed/ive sensing process, even though it may be very productive indeed to set up a given experiment as a compressed sensing process.


Ok, maybe it's a bit of a stretch, but I'm ready to stretch! And I do see great possibilities to get mathematics out of this :) But I can tell that the more I've been thinking about this, the less of a stretch it really is. More than anything, it is a new way of reasoning! And a very intuitive one at that, once you get into it.

But it's not going to be easy, and it's not going to be in the standard form of initial value problems and laws. The creation of the mathematics would itself be part of the equations, just as in true evolution. In this picture, there is full symmetry between initial condition and initial law. They are treated on the same footing.

Sorry about the excitation.

/Fredrik
 
  • #45
Peter Morgan said:
I don't think we can make Physics formally into a compressed sensing mathematics, unless we already have a UUT (Universal Unified Theory --- from me satire, from them serious).

For me there is a simple comfort in the face of such worries. If that is really how nature does it, and to me there are overwhelming indications from many fields, enough to make it somehow very plausible, then why couldn't I "copy nature", which is my only master, and do it as well? If nature can do it, so can I; humans have copied nature's solutions for as long as we can remember. Nature has then successfully proliferated its logic to some of its parts (me, for example), who repeat it. That's another angle on an evolutionary view. Proliferation doesn't necessarily need black holes spawning new babies, although that may of course be correct in parallel; there is no conflict there that I can see. So maybe compression algorithms copy themselves by induction: a way of reasoning, and of compression, is encoded in a system's interaction properties, so the interaction process could itself be the process of reproduction.

But regarding formal systems, you have a good point and I agree. This is why I think what we are looking for is a new kind of formal system, and new mathematics. Somehow the construction of the UUT will never be complete (I agree), but its progression should somehow be entangled with this quest.

/Fredrik
 
  • #46
Fra said:
But regarding formal systems, you have a good point and I agree. This is why I think what we are looking for is a new kind of formal system, and new mathematics. Somehow the construction of the UUT will never be complete (I agree), but its progression should somehow be entangled with this quest.
/Fredrik
Of course I only think random fields might be a way point, close enough to QFT to be useful for constructing certain kinds of useful models without too much pain, but different enough to give a broader understanding and perspective that might lead into another, potentially very different mathematics.
 
  • #47
Peter Morgan said:
Of course I only think random fields might be a waypoint: close enough to QFT to be useful for constructing certain kinds of useful models without too much pain, but different enough to give a broader understanding and perspective that might lead into another, potentially very different mathematics.

Yes, this is a good reason for sure. My personality is such that I find not focusing on what I think is the way to go, even if it seems like an overwhelming task (which it sometimes does), to be more painful. To me, being out of focus is stressful, more so than being exposed to an overwhelming task. As long as I think the quest is right, I am calm, even if the road looks dark. To find out I had solved the wrong problem would be a high stress factor for me. So I prefer to try to solve maybe a fraction of what I think is the right problem. This is why I spend so much time reflecting on what is technically very basic, but the basics are, after all, the foundation for everything else too. This is not something you can afford to do as a professional, I think, if you have to publish papers at a given rate.

I read your random field paper last time, but as I think I commented then, my first hang-up was the lack of a description of the indexing process, which I think is due to my own twisted personal perspective. I would like to also see the index set of the random fields as a random process; then your ideas would be more coherent (from my strange perspective). I really appreciate that you seem to have an original approach; this is great IMHO. I think more diversity and independent, original thought is needed, so I enjoyed your papers in that respect. Unfortunately, due to my own preferences, I don't have many directed comments.

/Fredrik
 
  • #48
Fra said:
I read your random field paper last time,
Fra, there's a new random field paper on arxiv.
 
  • #49
  • #50
Peter Morgan said:
Thanks Marcus.

Fra, I was going to PM you to say so. It's on today's arXiv, and there's a discussion thread (https://www.physicsforums.com/showthread.php?p=2123793), kindly set up by Marcus.

Great. I must have missed that thread. Thanks, I'll check your paper later and probably also skim your previous paper to refresh.

/Fredrik
 
  • #51
The random dynamics approach is very interesting to me. Can anyone point to others who are taking a similar philosophical approach?

Nielsen is saying that the universe emerges as an averaging over canonical randomness in some sense. So from utter "hot" complexity we have a descent towards the coldest simplicity.

Traditional thinking is that the initial condition for a universal theory is the cold simplicity of a symmetry (a symmetry that gets broken in a cascade of dis-unifications). Nielsen's approach suggests that the initial conditions are instead what others would call a state of vagueness: a chaos of all possibilities. This would also look "symmetrical" in some lights, but it is different in being an ultimately complicated state (though disorganised, and so not complex in the complex-systems sense).

Note that it would stand in clear opposition to the more popular multiverse approach. One says that from simple components, a multiplicity of worlds must be spawned. The other argues that from every kind of component, one average system must emerge as the dynamic "thermal" equilibrium balance.

Anyway, who else may be following an "averaging over complexity, averaging over pure randomness" approach to universe origins?
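As a toy aside (my own sketch, not anything from Nielsen's papers), the "averaging over pure randomness" intuition has a familiar mathematical core in the central limit theorem: average enough arbitrary micro-rules and a simple, universal macro-behaviour emerges regardless of the micro-details. Everything below is illustrative.

```python
import random

# Toy sketch: draw many "micro-laws" as deliberately irregular random
# increments, then average. Whatever the micro-distribution, the averaged
# behaviour is simple and universal (mean near 0, fluctuations shrinking
# like 1/sqrt(N)) -- the central-limit flavour of "simplicity from chaos".
random.seed(0)

def messy_increment():
    # a mixture of very different micro-rules, none of them special
    r = random.random()
    if r < 0.3:
        return random.uniform(-2.0, 2.0)
    elif r < 0.6:
        return random.choice([-1.0, 1.0])
    return random.gauss(0.0, 0.5)

N = 20000
samples = [messy_increment() for _ in range(N)]
mean = sum(samples) / N
# the macroscopic average is far simpler than any of the micro-rules
print(round(mean, 2))
```

The point of the sketch is only that the averaged quantity forgets the details of the micro-rules; it is not a model of any specific physics.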
 
  • #52
apeiron said:
Anyway, who else may be following an "averaging over complexity, averaging over pure randomness" approach to universe origins?

I am not sure exactly what in that approach you like the most, and it has been some time since I went through it myself, but my impression is that surprisingly little is done in this direction compared to the larger research programs.

One obvious problem is that some of the new lines of reasoning are so radical that a lot of the standard notions on which the well-tested standard model rests need to be reconstructed, which makes this cumbersome. So for sure, in the short run it seems easier to find a tweak to the current models than to aim for a total reconstruction.

There are some people doing different things that have elements of what I think is new thinking, but there isn't yet a coherent direction that I know of.

As Nielsen mentions in his description, a mainstream idea is to look for symmetries under which the physical actions are invariant, and then, as we do so, identify new forces and particles. Unification then usually means looking for the bigger über-symmetries that unify all previously known symmetries.

I think this is an absolutely twisted way of doing it; I think it's flawed. Nielsen seems to object to this general way of reasoning too. If you ponder how to infer this über-symmetry from a plain communication process, then I think it's quite clear that a different angle is needed. The symmetry must be emergent.

Nielsen talks about the random Planck scale. I might reason differently, but one idea is that the Planck scale may be so chaotic that no observer there can SEE the full symmetries. The symmetry is only seen by a massive observer.

Instead of starting with symmetries pictured mathematically without a clear physical basis, I think we can start with "random actions"; in this picture the complexity scale LIMITS the POSSIBLE symmetries. Complexity scale and energy scale are related, sometimes inversely; it depends on where you picture the observer. I.e., if you talk about the Planck scale, are you talking about a human lab probing it in a future hypothetical accelerator (which won't happen soon anyway), or are we picturing an actual observer living at the Planck scale? My starting point is the latter, and then the random action is the decent abstraction.

To me there are two main points here.

1) To emphasise the "inside view" to a larger extent. In a certain sense the traditional symmetry argument is really an "external view" of how the "inside views" RELATE, by symmetry transformations. But note that this implies a third observer, a bird's view. This is not a coherent analysis IMHO.

Someone who seems to start to take this more seriously is Olaf Dreyer. Look him up. He has something he calls internal relativity, and a good way of reasoning IMO.

Rovelli's Relational Quantum Mechanics paper also has a good start, but Rovelli gives it a different turn which IMHO breaks the coherence. The initial reasoning is good, though.

The bird's view must go away. It's an ugly duckling :)

2) The evolutionary perspective: we must stop pretending that realistic models can be made of closed systems. To model open systems, evolving models are needed, I think. Here Lee Smolin, with his CNS and several of his talks, presents many good arguments in favour of this.

To me, these two ideas marry very well, and I think their combination is the key. But together they imply a dense reconstruction of what most physicists consider "basic formalism". So it's probably a long-term project.

/Fredrik
 
  • #53
Fra said:
Someone who seems to start to take this more seriously is Olaf Dreyer. Look him up. He has something he calls internal relativity, and a good way of reasoning IMO.

I noticed before, and it may be appropriate to mention again here, that Olaf Dreyer also submitted a paper to this time contest.
http://fqxi.org/community/forum/topic/375

Unfortunately I haven't had time to read it yet, but I will.

(Just to get back on topic here, sorry for the diversion)

/Fredrik
 
  • #54
apeiron said:
Nielsen's approach suggests that the initial conditions are instead what others would call a state of vagueness - a chaos of all possibilities. This also would look "symmetrical" in some lights.

This is in line with my thinking too. The symmetry is restored in my picture, since the MEASURE of symmetry only makes sense in terms of OBSERVED symmetry, and when this is constrained to a Planck-scale OBSERVER, any asymmetry would not be distinguishable.

From the point of view of a true inside observer, simplicity is restored in the middle of the chaos. A human-designed accelerator probing the Planck scale doesn't qualify as an inside observer, because the whole measurement setup is rigidly attached to a massive environment that is largely controlled. Logic that works fine in such cases may not work in the inside-observer scenario.

Cosmological scenarios are more like the inside-observer case.

This is why the treatment of statistics and probability must be drastically different in particle physics as compared to, say, cosmological theories.

But not only that: I think that acknowledging the inside view is also the key to understanding the LOGIC even of the standard model of particle physics, i.e. WHY the action looks like it does, and why we see this mass spectrum and not anything else.

After all, from the point of view of the particles themselves, they are inside observers totally unaware of human accelerators. I expect the rationale behind the laws of particle physics to exist in that view.

So the inside view does not only become relevant for cosmology. I am personally more interested in its applications to the logic of the laws of physics in general, including the tuning problems etc.

As far as I currently see, I think an evolving universe with evolving laws is the only reasonable solution. But an evolving universe on the cosmological scale, such as black-hole reproduction, and the reproduction, copying and evolution of the "compression" algorithms that encode the action of microphysics, really are two perspectives on the same process.

/Fredrik
 
  • #55
Fra said:
As Nielsen mentions in his description, a mainstream idea is to look for symmetries under which the physical actions are invariant, and then, as we do so, identify new forces and particles. Unification then usually means looking for the bigger über-symmetries that unify all previously known symmetries. I think this is an absolutely twisted way of doing it; I think it's flawed. /Fredrik

There must of course be something in this because we observe broken symmetries and so feel naturally that in repairing them, we will get back towards their original source.

But there are still two radically different ways this same observed outcome could have been achieved. There may have been one unbroken, most general symmetry that crumbled in a cascade of symmetry breakings to reach its lowest-energy incarnation. Or, the different thought: there could have been an unformed chaos of potential (hand-waving here for the moment) which fell into all available symmetry "slots", a bit like a rain of pachinko balls getting hung up at various available levels.

So the first way of looking at it would demand some super-symmetry. All the observed symmetries of the standard model would have to fold back neatly into the one E8xE8 or whatever. And we know that project is not working so well.

The second way would instead say that the various symmetries do not have to be related in this fashion. They would not have to heal into one super-symmetry. Instead, they could just be the collection of symmetries that (somewhat platonically) exist as resonant forms, and any cooling of chaotic potential would congeal into the various slots to various degrees.

This is a very untechnical description but it is a sketch of an alternative story that could result from taking an approach along the lines Nielsen seems to be exploring.

You say the symmetry must be emergent. This other view would be saying the broken symmetry - the crisp result - is emergent! And the ultimate symmetry never existed. The evolution (or rather development) is from the formless to the formed.

Now you would have to call that a very twisted view. But it is like a phase transition approach. Symmetry is discovered waiting as vapour cools to water then cools to ice.
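The phase-transition picture can be made concrete with a standard toy model; this is my own illustrative sketch, not anything from the thread. In the mean-field equation m = tanh(m/T), the symmetric solution m = 0 is the only stable one at high temperature, while a crisp nonzero magnetisation appears on cooling below the critical temperature T_c = 1, which is exactly a broken symmetry "emerging" from a formless state.

```python
import math

# Mean-field toy model: solve m = tanh(m / T) by fixed-point iteration.
# Above T_c = 1 the iteration relaxes to the symmetric solution m = 0;
# below T_c it converges to a nonzero order parameter, i.e. the crisp
# broken-symmetry result emerges on cooling.
def magnetisation(T, m0=0.5, iters=2000):
    m = m0
    for _ in range(iters):
        m = math.tanh(m / T)
    return m

m_hot = magnetisation(T=1.5)   # above T_c: relaxes towards 0
m_cold = magnetisation(T=0.5)  # below T_c: nonzero magnetisation
print(round(m_hot, 3), round(m_cold, 3))
```

The same fixed-point equation started from the same seed gives qualitatively different answers on either side of T_c, which is the whole content of the vapour/water/ice analogy above.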


Fra said:
1) To emphasise the "inside view" to a larger extent. In a certain sense the traditional symmetry argument is really an "external view" of how the "inside views" RELATE, by symmetry transformations. But note that this implies a third observer, a bird's view. This is not a coherent analysis IMHO. /Fredrik

I'm not clear whether you are thinking along my lines here or not.

But anyway, I have come across the internalist argument in a different context.

https://webmail.unizar.es/pipermail/fis/2005-January/000868.html

Fra said:
2) The evolutionary perspective: we must stop pretending that realistic models can be made of closed systems. To model open systems, evolving models are needed, I think. Here Lee Smolin, with his CNS and several of his talks, presents many good arguments in favour of this.
/Fredrik
/Fredrik

Here I agree completely that the open systems perspective would be more fundamental. My own background is in neuroscience and theoretical biology. So I am most interested in the areas where physics is starting to learn from biology and complex systems thinking.

Please don't take offence but Smolin takes perhaps the wrong part of biology. The focus should be on theories of development and self-organisation rather than theories of evolution and selection. And Nielsen is more in the development camp I feel.
 
  • #56
Fra said:
From the point of view of a true inside observer, simplicity is restored in the middle of the chaos. /Fredrik

Now this echoes my own preferences more clearly. For cites on those considering the quantum realm as a vagueness:

Chibeni, S. S. [2004b] 'Ontic vagueness in microphysics', Sorites 15 (December 2004), pp. 29-41.
French, S & Krause, D [2003] 'Quantum vagueness', Erkenntnis 59, pp. 97-124.
Lowe, E J [1994] 'Vague Identity and Quantum Indeterminacy', Analysis 54, pp. 110-4.
 
  • #57
apeiron said:
There must of course be something in this because we observe broken symmetries and so feel naturally that in repairing them, we will get back towards their original source.

There is nothing wrong with symmetry arguments in themselves. My objection is, however, to the mathematization of the arguments, where symmetry arguments have more to do with mathematical beauty than with science, or, as I think of it, with foundation in communication processes.

I agree symmetries are important; it's how they are treated. (As I noted, I choose to focus on intrinsically "observed" symmetries, and in this picture symmetries are generally evolving and thus emergent.)

It's this mathematical-beauty trend that I simply don't get. It's too much toying around, and I have no faith in the methodology. It's not symmetry itself that I dislike; on the contrary, I expect the symmetry to be physical, not just mathematical. And physical to me means attached to interaction processes, and thus inferable from the interaction/communication process.

apeiron said:
You say the symmetry must be emergent. This other view would be saying the broken symmetry - the crisp result - is emergent! And the ultimate symmetry never existed. The evolution (or rather development) is from the formless to the formed.

Now you would have to call that a very twisted view. But it is like a phase transition approach. Symmetry is discovered waiting as vapour cools to water then cools to ice.

It's not at all twisted to me. I think this is the less speculative view: the symmetries are emergent, formed as evidence is acquired, not speculated as external ad hoc conjectures.

The phase transition analogy is decent, but that would be how an external observer describes it. The inside observer does not SEE the phase transition process; it just sees an emergent symmetry, not the mechanism behind it.

To me, minimum speculation is a guiding principle, but without illusions of universal simplicity. It's the evolution that flows according to the principle of least speculation.

apeiron said:
I'm not clear whether you are thinking along my lines here or not.

But anyway, I have come across the internalist argument in a different context.

https://webmail.unizar.es/pipermail/fis/2005-January/000868.html

Thanks, I'll check this later!

apeiron said:
Here I agree completely that the open systems perspective would be more fundamental. My own background is in neuroscience and theoretical biology. So I am most interested in the areas where physics is starting to learn from biology and complex systems thinking.

Please don't take offence but Smolin takes perhaps the wrong part of biology. The focus should be on theories of development and self-organisation rather than theories of evolution and selection. And Nielsen is more in the development camp I feel.

I have no _formal_ background in biology at all, but I have studied some biochemistry and molecular biology as side projects, and I am also very much inspired by this. I agree with you that in some respects physicists have a lot to learn from biology and complex living systems.

I also agree with you about Smolin :) I like him because he is one of the few physicists who reason in a way to my liking, but that doesn't mean I like everything; I do not. But as always, take what you like and leave the rest, if that makes your own coherent soup :)

/Fredrik
 
  • #58
Fra said:
There is nothing wrong with symmetry arguments in themselves. My objection is, however, to the mathematization of the arguments, where symmetry arguments have more to do with mathematical beauty than with science, or, as I think of it, with foundation in communication processes.

Symmetry in physics is more like a hidden way of reasoning by adding speculative constraints. And there is nothing wrong with this, but if we stop pretending it's something it's not, and instead try to understand what its pure nature is, it will be even more powerful.

If you look at a symmetry as a statement containing information about a system, one realizes that there is a massive amount of information implicit in a symmetry assumption. If you add to this the idea that information capacity is limited by energy and mass, then it's not really a stretch to see that the confidence in a symmetry, seen as acquired information, scales with the information capacity.

So the flat idea of universal symmetries that give us "constraints for free" lacks a physical basis IMHO. The reason this happens to work fine so far is related to an argument Smolin made in a talk on time and evolving laws (available as an mp3 somewhere; I saved it but forgot which talk it came from). The difference is the inside vs. outside observer: closed subsystems vs. open systems.

Edit: This is almost like the "spherical cow" metaphor. Who has not seen the typical physicist's abstraction: consider a closed system, with initial conditions, ...

Now, all premises must have resulted from experience, or communication processes. One might certainly ask what the process looks like whereby you conclude that a system is closed. And is the confidence in that conclusion infinite? (It's certainly not.) So closedness as such, "either it's closed or it's not", isn't even a valid premise. I think there is no physical process by which you can arrive at that premise.

You can THINK that the system is closed and act accordingly; this we do all the time. But this is something different. It means that the action should reflect this uncertainty, and not run into hard inconsistencies if the premises were wrong. Instead it should respond promptly to feedback that contradicts the prior premise, revise, and evolve. Rigidly constructed models in the old spirit rarely allow such actions; instead it would be considered an inconsistency and run into a HALT. I have yet to see nature run into a HALT.

/Fredrik
 
  • #59
Fra said:
I noticed before, and it may be appropriate to mention again here, that Olaf Dreyer also submitted a paper to this time contest.
http://fqxi.org/community/forum/topic/375

Unfortunately I haven't had time to read it yet, but I will.

(Just to get back on topic here, sorry for the diversion)

It's clear that Dreyer has good ambitions and still has a lot to do. But nevertheless, some of his basic motivation is something I fully share. I think I may differ in a few ways, but regardless of how he arrived at the position (something I can't read out of his papers), he mentions a few keys that make perfect sense to me and are right in line with how I think.

"The problem of time arises because the metric is distinct from matter fields. In this essay we argue that it is this split between geometry and matter that is to blame for the problem of time together with a number of other problems."

As he has argued previously, matter and geometry must emerge together. Their separation is artificial, and this artificial split is the root of several problems. I personally fully agree with this.

The point where I am not sure what he means is this:

"What then is the nature of time that we are proposing? In internal relativity time appears in two different ways. The first notion of time would be completely familiar to a Newtonian physicist. This time is the background time of the theory. The second notion of time would be less familiar to a Newtonian physicist. It arises because the presence of a background time does not imply that an observer has access to it. Rather, the behavior of matter determines what the observers will measure. As we have seen this means that the internal time as it is measured by an observers differs from the background time."

The fact that he acknowledges the importance of what inside observers measure is good, and the most important thing. But I have a feeling that he misses what I think is the point. What worries me is that he intends to use a non-accessible background structure to explain internal relations. That is, despite his great ambitions, dangerously close to the standard view (although he reformulates it), so I wonder if he will solve anything by this. But it could be that I misunderstand him there.

He writes that matter determines what the observers will measure, and I agree, though I think of it as matter somehow being the observer, or making up the observer.

Thus it is completely clear that there will never be an observer observing empty space, because the mere fact that there is an observer suggests it's not empty. This is why I think the "pure gravity" approaches are doomed to be inconsistent as proper measurement theories.

I rather think of time as existing only within an observer's history. Even though he also says that what is important is the observed time and space, I do not like the idea that it has to be expressed in terms of an "inaccessible" external structure; not just for aesthetic reasons, but because this very image also needs a physical basis.

The relation between the inside views is, again, only speakable from an inside view. Correlations between different clocks are then just emergent. Maybe this is what Dreyer means by "not accessible external time", as in a non-accessible emergent symmetry. If so, we are quite close.

I'm excited to see any of Dreyer's future papers. I think his ideas are interesting; his is the best of the essays I've read, though I haven't gotten around to reading them all, unfortunately.

/Fredrik
 
  • #60
apeiron said:
The second way would instead say that the various symmetries do not have to be related in this fashion. They would not have to heal into one super-symmetry. Instead, they could just be the collection of symmetries that (somewhat platonically) exist as resonant forms, and any cooling of chaotic potential would congeal into the various slots to various degrees.
...

Please don't take offence but Smolin takes perhaps the wrong part of biology. The focus should be on theories of development and self-organisation rather than theories of evolution and selection. And Nielsen is more in the development camp I feel.


From my perspective, the key issue is how information is physically defined / measured / communicated. We know that the world does in fact do this, because of course we all actually experience a tremendous amount of information in our physical environment. But physics has not traditionally explored how a self-communicating system might work -- i.e. provide ways to define and communicate every one of its own elements in terms of other measurable elements.

If we knew how to describe the world this way, as a self-defining communications system, then perhaps we could envision the original universe as something like a quantum vacuum, where anything at all can and does happen -- but in such a lawless and structureless environment there would be no way for anything that happens to make a difference to anything else. So in effect there is nothing, all events remain merely "virtual."

But we could imagine that within this chaos there happen to be any number of different networks of interlinked events... and that among these could be a network that happens to be able to define and communicate something, internally. And then that minimal shared definition might be the basis for a sub-set of that network of events, that could define more complex and specific information within the framework established by the first.

I don't know whether this will be at all evocative to you -- I hoped to say more, but my son just arrived for a visit unexpectedly and my morning plans must change. Anyway, talk about hand-waving!...

But this relates to the question of symmetries in the following way. A symmetry basically means we can define X (say the circumference of a circle) without needing to specify Y (any specific point on the circle). So wherever there is a symmetry, it's pointing to one type of information being potentially more fundamental than another, i.e. needing less of the structure of the physical world to be meaningfully definable and communicable.

So if the world evolved through a sequence of levels -- i.e. networks of events capable of defining more and more information, given the informational structure established by lower levels -- then that would manifest itself in a highly evolved structure containing many different kinds of symmetry.
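The circle example above can be checked mechanically; this is a minimal sketch of my own, with all function names hypothetical. The rotationally invariant quantity x^2 + y^2 can be specified without naming any particular point on the circle, and rotating the point leaves it unchanged, which is exactly the sense in which the symmetric quantity needs less information than the point itself.

```python
import math
import random

# Toy invariance check: r^2 = x^2 + y^2 is the same for every point on a
# circle, so specifying it never requires picking a particular point.
random.seed(1)

def rotate(x, y, theta):
    # rotate the point (x, y) by angle theta about the origin
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y = 3.0, 4.0
r2 = x * x + y * y  # invariant under any rotation of (x, y)
for _ in range(100):
    xr, yr = rotate(x, y, random.uniform(0, 2 * math.pi))
    assert abs(xr * xr + yr * yr - r2) < 1e-9
print(r2)
```

One invariant number stands in for a whole orbit of points, which is the "less structure needed" intuition in miniature.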


I have my own prejudices about "self-organization" vs "evolution" in biology -- but will have to wait. Oh yes, and this thread is about time in physics! Will get back to that later too.

Thanks -- Conrad
 
