On Fine-Tuning and the Functionality of Physics

  • Thread starter: ConradDJ
  • Tags: Physics
AI Thread Summary
The discussion centers on the concept of the universe's finely-tuned physics, suggesting that its complexity may indicate an evolved functional system rather than a random occurrence. While some propose theories like Smolin's cosmological natural selection, skepticism exists regarding their explanatory power concerning actual physics. The anthropic principle is mentioned as a way to rationalize fine-tuning, but it is deemed unhelpful for understanding the underlying mechanics of the universe. The conversation emphasizes the need to explore how physical systems operate and interact, particularly in terms of their deterministic nature and the role of mathematics in describing these dynamics. Ultimately, the dialogue seeks to uncover the potential functionality of physics, akin to biological systems, and how this may inform our understanding of the universe.
ConradDJ (Gold Member)
For purposes of this thread, I’m going to take it as established that the physics of our universe is very “finely-tuned” in many respects. That is, we can easily imagine alternate versions based on physics almost identical to ours, with slight variance in one or two parameters, in which no stable systems like stars or atoms could have come into existence – a universe supporting only a chaotic mess of interacting particles. This means that the quite complicated physics we find in the Standard Model – plus gravity and whatever else may be out there – seems prima facie to be extremely special and highly functional.

Obviously that doesn’t have to mean the universe has any specific purpose – for example, to support you or me, or our species. We know from biology that very complex and finely-tuned systems like us can evolve entirely by accident, via natural selection. It’s not clear how physics might have evolved, but we have Smolin’s “cosmological natural selection” hypothesis – i.e. that universes are a kind of self-reproducing organism, creating their offspring inside black holes. So it’s at least conceivable that we could explain the very special and complicated physics of our world as resulting from an evolutionary process of some kind.

Now I find Smolin’s proposal far-fetched and unattractive for a number of reasons, mainly because it tells us almost nothing about physics. Maybe he’s right that the basic function of physics is to make black holes and create more universes. But that’s pure speculation, and it’s not clear what it has to do with all the actual physics we know about.

If you look at a living organism, its functionality is obvious. It’s easy to relate almost any aspect of its structure to the functions of growing the organism and helping it survive, and ultimately of reproducing its species. But the functionality of physics doesn’t seem to be obvious at all. Physicists have always imagined the world as a formal structure based on mathematical principles, not as a system that has to do anything in order to exist.

So it’s tempting just to dismiss the fine-tuning of physics as an observer selection effect, per the “anthropic principle”. I.e. – of course the universe is structured to support the existence of complex systems, because if it weren’t, we wouldn’t be here to observe it. That’s true but completely unhelpful, again because it tells us nothing specific about what physics does or how it works.

Now my take on the situation is this. I think the “fine-tuning” of so many different aspects of physics is strong evidence that the universe is a highly evolved functional system. As to why we don’t see this functionality – actually I think we do see it, everywhere in physics; we just don’t recognize it as such. What the physical world is doing could very well be complicated, like the reproductive process at the basis of biology, and just as in biology, many different sub-functions may have evolved to support it. I think the problem is that all these functions are so basic to the way the physical world works that we tend to take them all for granted.

I’ll put a few examples of what I have in mind in the following posts. These are all things we more or less take for granted about physics – things that don’t seem to need explaining because “that’s just how the world is.” Briefly:

  • Physical systems “obey” mathematical equations.
  • Atoms function as “building-blocks” for many kinds of material structure.
  • Physical systems store information over time.
  • The properties of systems are measured by and communicated to other systems.
These are all complex topics in themselves, but I’m hoping to stay focused on this primary question – do they all contribute to some basic functionality that we might understand as a reason for the finely-tuned physics we observe? The point here is not to impose any a priori principle from outside empirical physics, but to see what physics itself has to tell us if we try to look at it from a functional standpoint.
 
As an example of something that goes on everywhere in physics, at least in the macroscopic domain – how is it that physical dynamics can be so amazingly “deterministic”? We know that to very high precision, the way systems move and change “obeys” mathematical laws – in fact, quite a variety of different laws. But how exactly do they manage that?

The thing is, mathematics has no way to compute the dynamics of even very simple systems, except by approximation. There is no closed-form solution describing the motion of even three idealized point-particles interacting via Newtonian gravity in Euclidean space – let alone the dynamics of any real physical system.
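To make that concrete, here’s a minimal numerical sketch – in Python, with masses, units and initial conditions that are purely my own arbitrary choices for illustration – of how three-body motion is actually obtained in practice: by stepping the equations of motion forward approximately, not by evaluating any closed-form solution.

```python
# Minimal sketch: the Newtonian three-body problem has no closed-form
# solution, so in practice it is approximated by numerical integration.
# Units, masses, and initial conditions here are arbitrary illustrations.
import numpy as np

G = 1.0                                  # gravitational constant (arbitrary units)
masses = np.array([1.0, 1.0, 1.0])       # three equal point masses (assumption)

def accelerations(pos):
    """Pairwise Newtonian gravitational accelerations for 3 bodies in 2D."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

# Arbitrary starting positions and velocities (2D for simplicity)
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

dt, steps = 1e-3, 10_000
for _ in range(steps):                   # leapfrog (velocity Verlet) stepping
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)

print(pos)  # an approximation whose error depends on dt, never an exact orbit
```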

In other words, physics is far more powerful than mathematics in its ability to define precise regularities of motion in interacting systems. And it operates just as efficiently with systems involving huge numbers of particles, where many different kinds of interaction go on at once. So how does that happen?

Usually this doesn’t seem like a question physics can or should even try to answer. “That’s just how the world is” – it’s not something to be explained. In fact, since we take this business of “obeying laws” for granted, we find it strange and almost paradoxical that at the quantum level the world is not “deterministic”. The Schrödinger equation and other aspects of QM that do seem to “obey” causal principles make sense to us, but the rest seems crazy.

But if we put this in the context of the question about functionality, this could well be a primary aspect of what physics does, and does extremely well. In that case, everything we’ve learned about QM superposition, measurement, decoherence and so forth would be crucial information about how this business of “determining” actually gets done, in physics. To me that makes more sense than the notion of systems magically “obeying equations” that don’t actually exist in mathematics.

We don’t know how this quantum mechanism works, but apparently it involves a random selection that happens when systems interact with each other and communicate the results to other systems. In Newtonian mechanics, the “state of a system” at a given point in time contains an infinite amount of information, so even predicting the exact behavior of a two-body system – for which there is an equation – would require an infinite computation. It seems that QM has a much more practical and efficient way to define and process physical information, operating with probabilities and approximate measurement-outcomes instead of trying to determine a mathematically exact reality.
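As a toy illustration of that contrast – just a sketch, with a two-level state I’ve invented for the purpose – the entire “information content” of a finite quantum state is a short list of amplitudes, and a measurement can be simulated as a weighted random selection via the Born rule:

```python
# Toy sketch: finite quantum information vs. infinite classical precision.
# A 2-level system is specified by a couple of complex amplitudes; a
# "measurement" is a weighted random selection (Born rule). All numbers
# here are illustrative, not a model of any specific physical system.
import numpy as np

rng = np.random.default_rng(0)

psi = np.array([3 + 4j, 1 - 2j], dtype=complex)   # unnormalized amplitudes
psi /= np.linalg.norm(psi)                        # normalize the state

probs = np.abs(psi) ** 2                          # Born-rule probabilities
outcomes = rng.choice([0, 1], size=10_000, p=probs)

print(probs)                   # exact probabilities: [0.833..., 0.166...]
print(np.bincount(outcomes))   # frequencies of the two measured outcomes
```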
 
Here’s another example – we take it for granted that atoms are stable, that each type of atom has distinct properties identical to all others of its type, and that they stick together to make molecules. In other words they work very effectively as standardized “building-blocks” for all the distinct forms of matter we see around us. But what exactly does it take to make a functional building-block?

So far as I know, there isn’t any easy and obvious way to make something work like this, based on simple mathematical principles. Newtonian mechanics certainly doesn’t do the job – if atoms were tiny solar systems held together by gravity, then no two would be exactly alike, and they’d fall apart as soon as you tried to put two of them together. As for electromagnetism, you can’t even get two charged particles into a stable orbit without invoking several quantum principles.

In our universe, it takes a finely-tuned combination of many different physical laws to make an atom – starting with electromagnetism, the exclusion principle based on particle spin, and whatever it is that gives large masses to the nuclear particles. The large nuclear mass gives the atom a fairly well-defined position and momentum, while the much lighter electrons live in highly-structured “shells” with well-defined energy levels, which let atoms hook up with other atoms in several different ways, without disrupting their structure. The central nuclear charge – that binds a specific number of electrons and so maintains the distinct character of each type of atom – is unaffected by these combinations.
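To give one standard, concrete instance of those well-defined energy levels: the Bohr formula for hydrogen, E_n = −13.6 eV / n², already gives a discrete ladder of levels that is exactly the same for every hydrogen atom in the universe. A small sketch using only textbook numbers:

```python
# Sketch: the discrete electron energy levels of hydrogen (Bohr model).
# Every hydrogen atom shares exactly these levels, which is part of what
# makes atoms work as standardized, interchangeable building-blocks.
RYDBERG_EV = 13.605693  # Rydberg energy in electron-volts (textbook value)

def hydrogen_level(n: int) -> float:
    """Energy of the n-th bound level of hydrogen, in eV."""
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(f"n={n}: {hydrogen_level(n):+.3f} eV")

# A photon emitted in an n=3 -> n=2 transition (a Balmer line):
print(f"E(3->2) = {hydrogen_level(3) - hydrogen_level(2):.3f} eV")  # ~1.89 eV
```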

Atoms also function as tiny "clocks and rods" that define intervals in space and time quite precisely. Without atoms in the universe, it seems there would be no physical means of measuring anything. And despite quantum fluctuations, the angles between the atoms in a molecule are very exact, giving each type of molecule its distinctive chemical properties, and also supporting stable, well-defined material structures at a scale billions of times larger than molecules. On top of all that, atoms are sensitive detectors of electromagnetic radiation that can store data from past interactions in the energy levels of their electron shells.

Prima facie, it seems that the existence of atoms might be as significant for understanding physics as the existence of organisms is for biology. But the question about how to build a functional “building-block” – or a physical clock or a measuring-rod – doesn’t arise so long as we’re thinking of physics essentially as a formal, mathematical structure.
 
http://arxiv.org/abs/1008.3177
"It is worth noting that any system of equations can be derived from a variational principle: Simply multiply each equation by an undetermined multiplier, add them together, and integrate over spacetime (for PDE’s) or time (for ordinary differential equations). Such an action principle does not add any insights, and probably has no practical benefit. What we want in an action principle is an encoding of the equations of motion without the addition of any extra variables."
 
ConradDJ said:
We know that to very high precision, the way systems move and change “obeys” mathematical laws – in fact, quite a variety of different laws. But how exactly do they manage that?

The thing is, mathematics has no way to compute the dynamics of even very simple systems, except by approximation. There is no closed-form solution describing the motion of even three idealized point-particles interacting via Newtonian gravity in Euclidean space – let alone the dynamics of any real physical system.

In other words, physics is far more powerful than mathematics in its ability to define precise regularities of motion in interacting systems. And it operates just as efficiently with systems involving huge numbers of particles, where many different kinds of interaction go on at once. So how does that happen?

You are practically declaring the answer in your question. How does the universe instantaneously take everything into account and calculate with absolute precision for every particle in nature? It seems obvious that the "mathematics" the universe uses must be based on taking everything into account at once. And this means that all events must exist in conjunction with each other: reality is a conjunction of all its parts, nothing in reality contradicts anything else, and all facts imply the others. The math would have to assign coordinates to every event and account for every implication with some kind of function. It would be interesting to see whether anyone has come up with physical equations this way.
 
ConradDJ said:
Now I find Smolin’s proposal far-fetched and unattractive for a number of reasons, mainly because it tells us almost nothing about physics. Maybe he’s right that the basic function of physics is to make black holes and create more universes. But that’s pure speculation, and it’s not clear what it has to do with all the actual physics we know about.

If you look at a living organism, its functionality is obvious. It’s easy to relate almost any aspect of its structure to the functions of growing the organism and helping it survive, and ultimately of reproducing its species. But the functionality of physics doesn’t seem to be obvious at all. Physicists have always imagined the world as a formal structure based on mathematical principles, not as a system that has to do anything in order to exist.

Hello Conrad, your long post contains a good question, if I interpret it right.

I do not consider Smolin's CNS anywhere near a satisfactory answer either, even though it may still be part of it in some way.

Please correct me if I'm wrong, but to rephrase your long post: I think you're asking that IF the universe and the laws of physics we see are the result of some kind of Darwinian-style evolution, then there must be some discriminator to understand why some laws are more fit than others? I.e., what is the function/utility of physical law that allows this idea to make sense? Something like "reproduction", etc.

Sure we have CNS, but I agree with you that it seems unlikely to be the full depth answer.

I've been thinking about this, and to speak for myself, I have an idea on this that works for me.

The evolution of law is not really a fundamentally different process from ordinary dynamical evolution; it's just that what we physicists normally mean by dynamical evolution is pretty deterministic, while Darwinian evolution is more like a random walk. Our attempt to understand uncertainty in QM is to constrain the random walk by deterministic probabilities.

I think that each observer encodes physical law, and that this law has evolved to become "objective" within the local population of interacting observers, simply because it's the only way for that observer to stay stable. An observer that doesn't revise its strategy in compliance with its context is doomed. Further, as to "replication" - each observer certainly "contributes" to the environment as well, putting selective pressure on all other observers to also revise and align - this is essentially the "reproduction mechanism". This is drastically different from Smolin's idea, I think.

So one possibility is to simply view the "evolution" of law, as an ongoing process in this universe, and the specific laws we see here, are simply a result of an evolutionary equilibration process.

So what we interpret as forcing laws, or determinism (à la structural realism), is in my view nothing but an equilibrium.

Of course, one may ask: what does this solve, if the choice of laws is basically the choice of equilibrium? Is the equilibrium unique? Here the question becomes harder, and it's still open... but it gives a much better understanding, and there is a clear connection between the choice of equilibrium and the population of the universe, i.e. material systems and laws.

ConradDJ said:
We know that to very high precision, the way systems move and change “obeys” mathematical laws – in fact, quite a variety of different laws. But how exactly do they manage that?

I think of this as an equilibrium, with fluctuations around the laws, though it's often possible to reinterpret them as "statistical laws". This is how we do it. If we noticed a physical system DISobeying the laws, I'd bet money that no physicist would interpret it as that - the interpretation would be either dismissing it as a bad data point, or the discovery of a new interaction. This is how even the process of INFERENCE of the laws, from the point of view of another observer (experimenter), enters this view.

/Fredrik
 
This reminds me of an SF story by Stanisław Lem, according to which the order and harmony of the natural laws were a consequence of a clean-up operation by aliens.
 
bcrowell said:
"For purposes of this thread, I’m going to take it as established that the physics of our universe is very “finely-tuned” in many respects."

But this is not necessarily true:
Looking for Life in the Multiverse, Jenkins and Perez
Anthropic constraints on fermion masses
Quark Masses: An Environmental Impact Statement

I certainly wouldn't argue that our universe operates on the only kind of physics that could support life, or other kinds of complex structure.

And if there were only one or two physical parameters that seemed to be "finely-tuned", it would be more than reasonable to see it just as a coincidence that tells us nothing about the world. The fact that the world we live in is a very special and "highly improbable" one isn't significant in and of itself.

But the fact that we run into apparent fine-tuning in so many different aspects of physics and cosmology takes the issue beyond coincidence. It raises the question about why this particular combination of complex principles works to support a remarkable universe like ours. What does it take to do something like this?

So the point of fine-tuning, for me, is not to prove anything but just to raise the question about functionality -- i.e. what the world is doing, that's apparently not at all easy to do. If we take as a hypothesis that there is a single key functionality here -- by analogy to the functionality of reproduction in biology -- then we can look at some of the basic features of the physical world as clues to what sort of functionality this is.

Again, I'm not taking fine-tuning as "proof" that the world is a functional system. It just points to that as a possibility that I think is worth considering. It gives us a very different point of view from which to try to interpret the vast body of knowledge we have about the physical world.
 
  • #10
friend said:
How does the universe instantaneously take everything into account and calculate with absolute precision for every particle in nature? It seems obvious that the "mathematics" the universe uses must be based on taking everything into account at once. And this means that all events must exist in conjunction with each other: reality is a conjunction of all its parts, nothing in reality contradicts anything else, and all facts imply the others. The math would have to assign coordinates to every event and account for every implication with some kind of function.

Thanks for the reply... but I don't see that your conclusions have a basis in empirical physics... or mathematics. There is no "absolute" precision in nature, for one thing. And in the structure of spacetime, all events don't necessarily exist "in conjunction with each other" -- for every event there is a specific spacetime region ("past light-cone") containing all the information that would be relevant to it. And it's not at all obvious to me that "taking everything into account at once" would make the calculation problem easier.

The key point is that while mathematics is amazingly powerful in its own ideal world -- look at the Mandelbrot set for an illustration of what I mean -- its power is very limited when it comes to the kinds of situations we run into in physical dynamics. The Newtonian 3-body problem is a simple example -- in case you haven't come across this before, it's been proven (by Bruns and Poincaré) that it has no general closed-form solution. I understand that to mean that no combination of the simple functions that mathematics is built on reproduces the motion of 3 gravitating bodies -- although that motion is "strictly deterministic" (in classical physics) and you can get an arbitrarily close approximation using mathematical methods.

In other words, even the simple, classical physics of interacting particles has a kind of complexity that's quite different from the kinds that are "native" to mathematics. That's not to say mathematics is irrelevant to physics! Obviously it's exactly the right tool for describing specific aspects of what the physical world does.

But the point I'm trying to make is that we should be looking to physics itself to understand what the physical world is doing, rather than assuming a priori that the world is a formal system built on logical / mathematical principles.
 
  • #11
ConradDJ said:
But the point I'm trying to make is that we should be looking to physics itself to understand what the physical world is doing, rather than assuming a priori that the world is a formal system built on logical / mathematical principles.

Can we build a theory without logic and mathematics? Are we going to prove that somewhere the universe is not logical? I don't think there is any alternative except that ultimately the universe must be a manifestation of logic, which we describe with math...somehow.
 
  • #12
Fra said:
I think that each observer encodes physical law, and that this law has evolved to become "objective" within the local population of interacting observers, simply because it's the only way for that observer to stay stable. An observer that doesn't revise its strategy in compliance with its context is doomed. Further, as to "replication" - each observer certainly "contributes" to the environment as well, putting selective pressure on all other observers to also revise and align - this is essentially the "reproduction mechanism". This is drastically different from Smolin's idea, I think.

So one possibility is to simply view the "evolution" of law, as an ongoing process in this universe, and the specific laws we see here, are simply a result of an evolutionary equilibration process.

Hi Fredrik -- One major difference between your idea and Smolin's, if I understand correctly, is that the evolutionary process you have in mind operates all the time (at a "sub-quantum" level?) in our universe. So the process by which we get the seemingly permanent laws of physics that we see in the universe today, is related to the physical processes underlying everything we observe.

Smolin was thinking of physical law as just "given" somehow in the structure of each universe, so that there would be no question of laws evolving during the history of the universe itself. But in his more recent work he has been emphasizing the uniqueness of this universe and attacking the "multiverse" idea, and also the division of physics into "laws" and "initial conditions" -- so I'm not sure where his thoughts about evolution are heading now.

Another thing I appreciate is that you're trying to describe a basic functionality that could conceivably evolve. The problem is how to relate "the observer encoding physical law" or "putting pressure on other observers to revise their strategy," etc. with actual known physics.

What I'm trying to do in this thread is to get on the table some of the big, obvious things we know about physics but take for granted -- on the grounds that if there is a basic functionality to the universe, then it should be visible in almost every aspect of physics. So my question for you is how the logic of inference you're working with might fit into "what the physical world is doing," in this big picture we're trying to bring into focus.
 
  • #13
Here’s my third example of what we take for granted about physics, that could be an important feature of “what the world is doing” –

It’s hard even to conceive of reality without assuming that things with definite properties continue to exist through time. Or, in other words, that there’s information stored out there in the physical world. This is certainly something we all take for granted – except that it’s remarkably difficult to verify this assumption anywhere in fundamental physics.

Macroscopically, of course it holds true. But the closer we look, the more questionable it gets. Take an electron – an “elementary particle” with a certain definite charge and rest-mass that stay constant over time, the same for all electrons. But the underlying theory is nowhere near that simple. The electron’s charge generates a field that combines with the fields generated by other particles, and also acts back on the electron itself. Even in the equations of classical electrodynamics this back-action creates problems with infinities, and in the quantum theory all kinds of “virtual” interactions have to be taken into account as well. All of these things affect the measured values of the electron’s mass as well as its charge, so in fact we can’t tell exactly what the “real” mass and charge of an electron are.

This kind of situation appears everywhere in quantum physics – that is, what looks at first like a fairly simple set of facts turns out to be the net result of an infinite number of random “virtual” events each contributing to the information we observe. Nowhere does the theory show us anything just “sitting there” continuing to be what it is, over time, like a classical particle on an inertial trajectory through space.

Again, I’m not trying to prove anything. Even in quantum physics it’s meaningful to talk about systems “having” certain properties and states, even though the “properties” are represented mathematically by infinite series that may not converge, and the “states” by superpositions of all possible states.

But the nature of quantum theory should at least make us question whether this business of “storing information over time” should be taken as a basic, built-in feature of reality, rather than part of what the statistical operations of quantum mechanics somehow accomplish. After all, the basic fact underlying QM is that all interaction takes place in discrete, momentary events – what Planck once called “atoms of happening”. Quantum events don’t last through time... but they can communicate information that lasts, if it keeps on getting communicated again and again.

So when we talk about an electron, we might really be talking about an observed “appearance” that persists over time, made of information passed on through the web of momentary interactions, rather than a "real thing-in-itself" that automatically continues to exist, carrying its definite properties through time.
 
  • #14
ConradDJ said:
Smolin was thinking of physical law as just "given" somehow in the structure of each universe, so that there would be no question of laws evolving during the history of the universe itself. But in his more recent work he has been emphasizing the uniqueness of this universe and attacking the "multiverse" idea, and also the division of physics into "laws" and "initial conditions" -- so I'm not sure where his thoughts about evolution are heading now.

I'm not sure the two points you describe are, as I understand it, a contradiction for Smolin. It's true that Smolin, as far as I know his argument, thinks that the laws change only when a new universe is spawned. Also, the spawning of new universes in BHs is not really like other multiverse ideas; it's more like one universe somehow producing children.

Anyway, I have a different view. Just because you don't like Smolin's idea is no reason to reject the entire idea of evolving law.

However, the practical difference is minimal relative to what I propose. Even though I argue that in principle the laws keep evolving, this does not mean that the laws of physics as observed by a human observer change (except for the obvious fact that our knowledge of the laws changes). I'm just suggesting that distorting the laws of physics corresponds to an extreme form of non-equilibrium, one so bad that not even the laws are stable. OTOH, such a situation would be so unstable that it could only be a transient state. Nevertheless, I do think this idea can increase understanding and suggest certain research directions - in particular, the idea that the microstructure of matter, and the different interactions and their unification by energy scaling, can be understood as the evolved emergence of new interactions as we scale the complexity of the observer. And by that I mean that complexity scaling, or "growing larger" observers, is not just a mathematical transformation; it's a physical process corresponding to the origin of mass and energy, and I think a good picture of it is evolution.

ConradDJ said:
One major difference between your idea and Smolin's, if I understand correctly, is that the evolutionary process you have in mind operates all the time (at a "sub-quantum" level?) in our universe.

Yes. I propose that instead of having a universe populated by matter OBEYING certain laws - laws that are only mutated when new universes are spawned - the population of matter in the universe encodes inferred views of law, which determine its action. So instead of forcing laws, there is a democracy of observers, yielding laws only on average. BUT even these laws, when inferred by a REAL observer, are bound to keep evolving.

The analogy of the environment telling the observers how to evolve, and the observers telling the environment how to change, is similar to GR (Einstein's equation), but the difference is that we take away the structural realism implicit in Einstein's equation itself and replace it with an evolving equation. So Einstein's equation itself would correspond to an equilibrium - where all observers have no reason to revise their coded best guess. But on top of this it's also a measurement theory (unlike GR).

How to connect to current physics and make concrete predictions is in fact SIMILAR to ST; but with some major differences: there is no continuum, and there is no GIGANTIC landscape, since there is a small landscape that SCALES along with evolution. Also, the microstructure I picture is completely different from the string, and the action is different from the string action. But other than that, there are still similar traits. So all I can offer is a motivation for a research program. But that's just about as much as the other approaches promise as well :)

ST people may think that they still generate nice mathematics; I suggest that the program I advocate would indeed generate general inference mathematics, which would be extremely interesting for AI research as well.

/Fredrik
 
  • #15
Here’s my last example of something very basic we take for granted about the world – that physical interactions between things can communicate information. Or in other words, they can function as “measurements” of something.

In classical physics this never became an issue. We assumed everything is what it is, all systems have precisely definite properties, the physical world is a body of well-defined fact – regardless of what anything or anyone observes. So the question of what it actually takes to make an observation was a purely practical one, about how to set up the apparatus for any specific type of measurement.

But in QM, interactions in general are not measurements. Systems are described as being in a superposition of states, and when systems interact, their superpositions get “entangled”. A measurement is something special, that “reduces” the superposition to the specific state that’s actually observed. Now though there are many ways to interpret this situation (some of which deny that any “collapse” occurs), there remains a basic difference between “virtual interaction” between entangled systems and interactions that convey definite information.

So it becomes a key question in QM – what constitutes a measurement-interaction? The problem is that any type of physical interaction can function as a measurement, but only in the right context. And though physics knows all about how to describe interactions, there is no clear understanding of what constitutes a “measurement context”.
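One way to see the formal footprint of this difference is in a toy calculation – the two-qubit example below is my own invented illustration, not a model of any specific apparatus. When a system entangles with a second “pointer” system, the system’s reduced density matrix loses its off-diagonal interference terms: entanglement alone makes a pure superposition look locally like a mere statistical mixture, though it does not by itself select an outcome.

```python
# Toy sketch: entanglement destroying local interference terms.
# A qubit in superposition is entangled with a second "pointer" qubit;
# tracing out the pointer leaves a diagonal (classical-looking) mixture.
# The states here are invented for illustration.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

plus = (ket0 + ket1) / np.sqrt(2)          # system in superposition

# Before interaction: pure superposition, with off-diagonal terms.
rho_before = np.outer(plus, plus.conj())

# After a CNOT-like "pre-measurement": (|00> + |11>)/sqrt(2), a Bell state
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_joint = np.outer(bell, bell.conj())

# Partial trace over the pointer qubit -> reduced state of the system
rho_after = np.zeros((2, 2), dtype=complex)
rho4 = rho_joint.reshape(2, 2, 2, 2)       # indices: (sys, ptr, sys', ptr')
for p in range(2):
    rho_after += rho4[:, p, :, p]

print(np.round(rho_before, 3))  # off-diagonals 0.5: interference intact
print(np.round(rho_after, 3))   # off-diagonals 0: a mere statistical mixture
```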

I’ve raised this question here before – in this thread on “Why is anything measurable?”
https://www.physicsforums.com/showthread.php?t=393687
(The discussion got off-topic quickly, but see page 5, posts 67 – 74.)

Despite the difficulty of the question, the point here is that QM no longer lets us take it for granted that physical interaction automatically communicates information. And QM strongly suggests that there is determinate information in the world only where there’s a context in which that information can be physically determined.

So one way of describing the basic functionality of our world might be that it “observes” itself – that it provides a physical context of measurement for all its own characteristics. Where the “reality” of classical physics is just a vast body of given fact, in the world we actually observe, the facts are continually being determined. And they get determined only where they are also communicated and become part of the context that determines other new facts.

If this picture turns out to make sense, then it’s easy to see how the other “functionalities” discussed above contribute to it. I think it’s clear that physical measurements are possible only if we can count on things “obeying laws” and behaving in precisely predictable ways. So a “self-measuring” universe would presumably have to define certain common structural principles that always apply. There would be no possibility of measurement without the existence of stable atoms and molecules, and the ability to store information over time in the states and properties of systems is also clearly important.
 
  • #16
To sum up — Not only does “fine-tuning” give us a strong reason to explore the notion that the universe is a functional entity, but we can find indications of functionality in many very diverse aspects of physics, and see how they might work together to accomplish what we call “reality”.

The main difficulty with pursuing this line of thought is that we’re so used to thinking about reality as a set of given facts, based on some underlying formal structure. And until the last few decades, physicists had brilliant success in explaining essentially everything in the world in terms of a remarkably compact set of mathematical principles. But this still leaves the question – why this particular set of principles? Particularly since they’re not only complicated but in some cases quite bizarre, and not even clearly consistent with one another.

In case anyone's interested in exploring what a functional approach might involve, here are links to some other possibly relevant threads –

Evolving causality
https://www.physicsforums.com/showthread.php?t=403591

What are the fundamental information-processes in physics?
https://www.physicsforums.com/showthread.php?t=332292

On self-defining laws of physics
https://www.physicsforums.com/showthread.php?t=331008
 
  • #17
Without having read everything, I would like to comment on

For purposes of this thread, I’m going to take it as established that the physics of our universe is very “finely-tuned” in many respects. That is, we can easily imagine alternate versions based on physics almost identical to ours, with slight variance in one or two parameters, in which no stable systems like stars or atoms could have come into existence – a universe supporting only a chaotic mess of interacting particles. This means that the quite complicated physics we find in the Standard Model – plus gravity and whatever else may be out there – seems prima facie to be extremely special and highly functional.

Obviously that doesn’t have to mean the universe has any specific purpose – for example, to support you or me, or our species. We know from biology that very complex and finely-tuned systems like us can evolve entirely by accident, via natural selection.

Isn't "fine-tuning" an indication in physics that we're just overlooking something? As examples I think of Einstein's static universe and the cosmological constant (the universe is not static), Ptolemy's geocentric model (the Earth is not the centre), etc. If we had an explanation for it, it wouldn't be fine-tuning anymore, I would say.
 
  • #18
haushofer said:
Isn't "fine-tuning" an indication in physics that we're just overlooking something? As examples I think of Einstein's static universe and the cosmological constant (the universe is not static), Ptolemy's geocentric model (the Earth is not the centre), etc. If we had an explanation for it, it wouldn't be fine-tuning anymore, I would say.

Yes, this makes sense. And if it were only one or two aspects of the current model that seemed to be "finely tuned" it might well point to some specific assumption that needs to be revised.

But since so many very different aspects of the model seem to be "required" for any of the kinds of physical structure we see in the universe, it makes sense to me to try thinking about the world as pervasively "functional", in the way a living organism is pervasively functional. I.e. many different kinds of structure working together to accomplish something (in the case of an organism, reproducing its species).

In other words, I would look at all the different formal / mathematical principles of the current model as having different functional roles in relation to "what the universe needs to do" in order to exist. Rather than envisioning some single underlying formal principle -- a single field-equation, for example -- that somehow explains all of them.

It might be worth noting that in biology one sees examples of "unification" everywhere -- e.g. humans and chimpanzees derive from a common ancestor. If we had a complete fossil record, we could go back and find very simple organisms, at the source of all the different life-forms on Earth. But we would be mistaken if we looked for an explanation of life in its formal simplicity! And I'm thinking that we may be making just this kind of mistake in physics -- pursuing mathematical "unification" as an end in itself.
 
  • #19
ConradDJ said:
In other words, I would look at all the different formal / mathematical principles of the current model as having different functional roles in relation to "what the universe needs to do" in order to exist. Rather than envisioning some single underlying formal principle -- a single field-equation, for example -- that somehow explains all of them.

You seem to be saying that all things are consistent with some underlying functionality. Even in that case, the functionality becomes the formal principle which determines the rest of physics. So in any case, we are still seeking some underlying principle (functionality perhaps) from which all of physics is derived so that all the fine tuning is inevitable.
 
  • #20
friend said:
You seem to be saying that all things are consistent with some underlying functionality. Even in that case, the functionality becomes the formal principle which determines the rest of physics. So in any case, we are still seeking some underlying principle (functionality perhaps) from which all of physics is derived so that all the fine tuning is inevitable.

Yes. I agree.

Although physics is good at mathematically describing and rationalising what is observed and measured, it is not quite so good at anticipating the often unexpectedly complicated outcome of clever processes that nature has devised. Especially self-promoting ones; those that make it easier for the same process to continue or repeat itself. Think of how gravitational accretion unexpectedly causes stellar jets to form; how self-promoting fluvial erosion causes complexities like the Grand Canyon and how the self-promoting chemical replication of DNA is responsible for the biological complexities we live among.

Conversely, in a universe filled with complicated stuff, much of which seems to be the ultimate result of various self-promoting tricks-of-nature, I think looking for some 'underlying functionality' is carrying reductionism too far. I can't find 'functionality' in my dictionary, anyway.

It is as if physicists "seek him here, ... seek him there, ... seek him everywhere. Is he in heaven?—Is he in hell? That demmed, elusive Pimpernel."
 
  • #21
friend said:
You seem to be saying that all things are consistent with some underlying functionality. Even in that case, the functionality becomes the formal principle which determines the rest of physics. So in any case, we are still seeking some underlying principle (functionality perhaps) from which all of physics is derived so that all the fine tuning is inevitable.


Well, I was trying to make a distinction between formal and functional explanation that apparently doesn’t seem significant to you and oldman... and maybe it isn’t very clear in relation to physics. But in biology, it’s not that the structure of organisms is “consistent with” Darwinian evolution, as if reproduction and evolution were “formal principles” that life obeys. And there’s nothing that indicates the evolution of life on Earth is “inevitable”. Nor could we “derive” any of the details of biological structure from evolution.

On the other hand, we can understand most of those details – the “clever processes nature has devised” – as accidents that proved to be very useful in relation to the requirements of staying alive and reproducing the species. Evolutionary theory is very weak at prediction, but very powerful at making the world intelligible.

So if there were an “underlying functionality” in physics, we wouldn’t expect it to “determine the rest of physics.” We would expect it to show us how an essentially random chaos of lawless interaction comes to look so highly structured and precisely lawful, and what each of the different aspects of physics contributes to making the whole thing work.

If our universe evolved, then presumably it could have evolved in many other ways, leading to other laws and spacetime structures very different from ours – that would also have been very “finely-tuned” to work the way they work. And we would be able to understand the details of physics in our universe not as “inevitable” in any way, but as the comprehensible results of a unique history.
 
  • #22
oldman said:
Conversely, in a universe filled with complicated stuff, much of which seems to be the ultimate result of various self-promoting tricks-of-nature, I think looking for some 'underlying functionality' is carrying reductionism too far. I can't find 'functionality' in my dictionary, anyway.

It is as if physicists "seek him here, ... seek him there, ... seek him everywhere. Is he in heaven?—Is he in hell? That demmed, elusive Pimpernel."

I'm sorry about your dictionary... But anyhow, I hope the above post makes clear that this is not "reductionism". Or is it reductionist to say that all of biology stems from and responds to the necessity that every species find some way to reproduce itself? That doesn't mean, by the way, that all the "self-promoting tricks of nature" have a reproductive function, in biology. Biological evolution makes all kinds of things possible, many of which have no function at all and just happen to evolve through "genetic drift." But reproduction is still the basis of the process.

The other point I was trying to make in the posts above is that the “functionality” we’re talking about is not necessarily something elusive – a mysterious “secret of the universe” we have yet to discover. I suspect on the contrary that we don’t understand it clearly because it’s too obvious and too basic to everything in physics.

That was also true in biology. Darwin was certainly not the first to realize that organisms reproduce themselves! And it was no great discovery either that some members of a species are better at reproducing than others. But he was the first to realize the implications of these obvious facts of life – and he probably wouldn’t have achieved that if the notion of evolution over geological time had not already been “in the air.” Even once he understood that evolution operates the same way pigeon-breeders do, through selective reproduction, it took many years to convince himself and others about something that nowadays is practically self-evident (at least to biologists).

So basically this boils down to a question – since we find ourselves in a physical world that seems to be “finely-tuned” in many respects, is there something very obvious we’re taking for granted about physics, that might lead to a similar kind of understanding? For example, that things in the world are “determinate” and "obey laws" and are “observable”? This is all so basic and necessary to our experience that it’s still very hard for us not to take it for granted, even after many decades of wrestling with the meaning of QM, which calls all of these things into question.

From a “formal” standpoint, there are a lot of complex facts about the world that we’re trying to "derive" from some underlying mathematical principles. But maybe just the fact that there are observable facts has implications for physics that we haven’t understood.
 
  • #23
ConradDJ said:
... this is not "reductionism". Or is it reductionist to say that all of biology stems from and responds to the necessity that every species find some way to reproduce itself? That doesn't mean, by the way, that all the "self-promoting tricks of nature" have a reproductive function, in biology. Biological evolution makes all kinds of things possible, many of which have no function at all and just happen to evolve through "genetic drift." But reproduction is still the basis of the process.

... which process is biological evolution. And the replication of information coded in DNA by stereochemical means is the essence of reproduction, such replication being one of nature's invented self-promoting tricks. Of course it's not the only one, just the one that underlies Darwinian evolution.

The other point I was trying to make in the posts above is that the “functionality” we’re talking about is not necessarily something elusive – a mysterious “secret of the universe” we have yet to discover. I suspect on the contrary that we don’t understand it clearly because it’s too obvious and too basic to everything in physics.

You suspect that 'it' is in full view all the time but unexpected in appearance, and therefore unrecognised --- like The Purloined Letter in Poe's story?

It would be great if this were so. The only obvious feature of physics that to me seems as if it could fit the bill is the self-consistency that physics has. In particle physics there is such an idea --- I think it's called the bootstrap hypothesis --- proposed by Geoffrey Chew quite a while ago, discarded and perhaps again becoming relevant today. Maybe the physical world behaves as it does because this is the one (and only?) way its behaviours can mesh together seamlessly? Rather as the pieces of a jigsaw puzzle fit together.

But the devil is in the details. While electromagnetism and special relativity are seamless partners, there is as yet no seamless meld of say, quantum mechanics and gravity. Not for the want of trying, though, as this forum shows!

...But maybe just the fact that there are observable facts has implications for physics that we haven’t understood.

Observable facts that all fit together seamlessly?
 
  • #24
I think you're looking for a discussion, and I've already thrown in my perspective, but here is another question, just to provoke some points...

ConradDJ said:
From a “formal” standpoint, there are a lot of complex facts about the world that we’re trying to "derive" from some underlying mathematical principles. But maybe just the fact that there are observable facts has implications for physics that we haven’t understood.

I guess you are hinting that just - MAYBE our preconception that nature "obeys laws" etc., and thus that there must be some underlying formal system from which it all can be derived - is wrong...

...could the QUEST for such "compactified" understandings, in the form of formal reductions, still be RATIONAL? What is its utility? What is the "survival value" of such reductions EVEN if they are mistaken for mathematical truth?

(I have an opinion, but maybe someone else may want to comment?)

/Fredrik
 
  • #25
ConradDJ said:
But since so many very different aspects of the model seem to be "required" for any of the kinds of physical structure we see in the universe, it makes sense to me to try thinking about the world as pervasively "functional", in the way a living organism is pervasively functional. I.e. many different kinds of structure working together to accomplish something (in the case of an organism, reproducing its species).

If you want to explore a biological analogy properly, then I would say you need to anchor it in theoretical biology - precise models of what makes life different from non-life, bios different from abios.

For instance, both bios and abios are functional in thermodynamic terms - they arrange themselves into structures that dissipate entropy. So self-organisation and fine-tuning can be explained in that context.

But then actual bios does something else. It does not just develop (which is all a dissipative structure does) but also has the secondary machinery to control and even evolve.

As Howard Pattee puts it, it uses rate-independent information to control rate-dependent processes. So for example, our genes (which store information in a "timeless" fashion), throw enzymes into the mix to control the rate of some metabolic reaction, some self-organising dissipative process.

This genetic information does not develop (it stands apart from the usual molecular wear and tear) but it does evolve - there is a process for mixing up the information every so often and trying out some new combination.
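A deliberately crude sketch of Pattee's distinction (the rate table, the mutation rule and every number below are invented purely for illustration): a static "genome" symbol selects the rate constant of a decaying "metabolite" without itself participating in the dynamics, and only a separate, occasional mutation step ever touches the genome.

```python
# Toy model of rate-independent information controlling a rate-dependent
# process (after Pattee's distinction). The "genome" is a static symbol
# that never participates in the dynamics; it only sets a rate constant.
# All numbers and the mutation rule are invented for illustration.
import random

RATE_TABLE = {"A": 0.1, "B": 0.5, "C": 0.9}   # genotype -> decay rate

def simulate(genome: str, x0: float = 1.0, dt: float = 0.1, steps: int = 50) -> float:
    """Rate-dependent decay dx/dt = -k x, with k chosen by the genome."""
    k, x = RATE_TABLE[genome], x0
    for _ in range(steps):
        x += -k * x * dt                      # the dynamics "wear on" x...
    return x                                  # ...but never on the genome

genome = "A"
random.seed(1)
for generation in range(5):
    x_final = simulate(genome)
    print(f"gen {generation}: genome={genome}, metabolite={x_final:.4f}")
    if random.random() < 0.3:                 # occasional mutation: the only
        genome = random.choice("ABC")         # process that changes the genome
```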

Anyway, the point is that theoretical biology makes some clear distinctions between development and development-with-evolution, between abios and bios. Functionality and fine-tuning are part of both stories, but they are two different stories.

So, if cosmological thinking is now looking for wider inspiration (as it has with Smolin), then the speculation has to respect this very critical distinction.

If you are talking just development, then that is the realm of dissipative structure theory and other "raw" forms of self-organisation.

If you are talking about evolution, or evo-devo, then that would require a universe or multiverse to have something extra, some equivalent of Pattee's epistemic cut (the separation of rate-independent information and rate-dependent processes).

Although, having said all that, Smolin's spawning black holes story is perhaps a curious hybrid - somewhere in between evo and devo. Every black hole, which is a white hole on the other side, has a "genetic" memory in that it provides a set of initial conditions that are rate-independent so far as the rate-dependent development of the new baby universe is concerned.

But there is then no selection as such to fine-tune the information bound up in a black hole. An unlimited supply of entropy and "space" is available so that the branching is without limit and there is no actual constraining competition between alternative recipes for particular universes. Some just happen to be more fecund than others over time.

This "biological" theory needs some source of variation so that not all black holes are alike in the first place, as well as a source of unbounded entropy, and most of all, some reason to believe in white holes.

However, again, I think it shows that biology can be a source of inspiration for cosmological theorising. And there is a well-developed set of definitions in theoretical biology that would allow for a more precise framing of theories based on "fine-tuning from functionality".
 
  • #26
oldman said:
While electromagnetism and special relativity are seamless partners, there is as yet no seamless meld of say, quantum mechanics and gravity. Not for the want of trying, though, as this forum shows!

But this could be precisely the key mistake - to expect a seamless meld in the form of a reduction of a higher emergent level of description (such as GR) to a lower foundational level (such as QM).

Yes this is the way physicists think :smile:. But it is not necessarily how all biologists think. Instead, complex, hierarchically-structured worlds or systems arise via the synergistic interaction between bottom-up constructive degrees of freedom and top-down boundary conditions or emergent constraints. A local~global interaction.

So in this view, the systems science view, you always end up with an irreducible two-ness. You need both the local and the global to have anything arising at all.

Now an irreducible two-ness has arisen in fundamental physics - GR and QM. And their interaction quite successfully gives rise to the classical realm in which we live.

To the reductionist who wants only a theory of the local, this is a frustration. But to a systems thinker, it is only natural that we end up with theories describing both the local and the global.

The task then is not to collapse the global to the local but instead to formalise the nature of their interaction. Which is still a big task, but not the same task. It is a different way of thinking about seamless.
 
  • #27
apeiron said:
But this could be precisely the key mistake - to expect a seamless meld in the form of a reduction of a higher emergent level of description (such as GR) to a lower foundational level (such as QM).

Yes this is the way physicists think :smile:. But it is not necessarily how all biologists think...

Much of what you then say sounds grand and quite profound, apeiron. Some of today's physicists who are indeed frustrated by the inability of their theories to predict observable phenomena might well be tempted to give this biological approach a whirl. You say that:

...complex, hierarchically-structured worlds or systems arise via the synergistic interaction between bottom-up constructive degrees of freedom and top-down boundary conditions or emergent constraints.

But can you give frustrated physicists hope by giving specific examples of how 'local-global interactions' actually work? Better still, has this approach succeeded in predicting observable phenomena? Again, examples would help.

Without the element of verifiable prediction physics has an unfortunate tendency to degenerate into a masquerade of words and squiggles. I'd better not give examples.
 
  • #28
oldman said:
But can you give frustrated physicists hope by giving specific examples of how 'local-global interactions' actually work? Better still, has this approach succeeded in predicting observable phenomena? Again, examples would help.

I can't speak for apeiron, but I share some of the general traits of his reasoning, and I think that first of all this is a difficult problem, so just because one may have ideas on constructing principles doesn't mean the step to specific predictions is short.

oldman said:
how 'local-global interactions' actually work? Better still, has this approach succeeded in predicting observable phenomena?

I personally think that the quantitative description of these interactions lies in evolving, interacting inference models. So what are inference models? What I mean by "inference models" is mathematical models of how to produce various kinds of "expectations" by means of induction/deduction/abduction. These expectations then guide the actions of the inference system. Further, in this generalized inference thinking we are not talking only about deductions, but about generally uncertain inferences and reasoning based upon incomplete information. So sometimes the optimal actions are not consistent with the feedback from the environment; then an evolution takes place in which the inference system itself evolves (not just the information state, but also the state of the inference machinery).

This is not so common in physics and is very underdeveloped, but there are people looking into this. There are also strong analogies to economic system theory, where the predictions are the existence of equilibrium points. There are also strong analogies to learning models, similar to the human brain, which is exactly an evolving, interacting inference system where the inferences determine the actions. Further feedback "drives" learning and evolution of the brain. Predictions could be how two brains interact. From an understanding of how an inference system works, one can make predictions about the interaction properties of two such systems.

The analogy (which is of course still strongly underdeveloped) is to consider two physical systems as two inference systems; their interaction properties, and thus the overall dynamics of the isolated combined system, could then in principle be predicted.
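To make the two-level evolution concrete, here is a minimal toy sketch in code (the class, the thresholds and the coin-flip environment are all illustrative assumptions of mine, nothing like the non-commutative structures I actually have in mind). The system revises its information state from ordinary feedback, and when its expectations fail persistently, it revises its own inference machinery:

```python
import random

class InferenceSystem:
    """Toy two-level inference system (illustrative only).

    Level 1 (information-preserving): revise the state - a running
    estimate of the environment's bias - from each new observation.
    Level 2 (non-preserving): if expectations fail persistently,
    revise the machinery itself, here crudely the learning rate.
    """
    def __init__(self):
        self.p = 0.5          # state: expected probability of outcome 1
        self.rate = 0.05      # machinery: how strongly feedback revises the state
        self.error_avg = 0.0  # running measure of how badly expectations fail

    def observe(self, outcome):
        self.error_avg = 0.95 * self.error_avg + 0.05 * abs(outcome - self.p)
        self.p += self.rate * (outcome - self.p)   # level-1 revision
        if self.error_avg > 0.35:                  # persistently inconsistent feedback
            self.rate = min(0.5, self.rate * 2)    # level-2: the machinery evolves
            self.error_avg = 0.0

# Usage: the environment's statistics shift midway; the system re-adapts.
random.seed(1)
bias, system = 0.9, InferenceSystem()
for step in range(400):
    if step == 200:
        bias = 0.1            # the environment changes
    system.observe(1 if random.random() < bias else 0)
print(round(system.p, 2))     # ends near the new bias, 0.1
```

The only point of the toy is the two distinct levels of revision: the state update preserves the machinery, while the change of learning rate is a crude stand-in for the inference machinery itself evolving under selective pressure.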

These aren't just foggy ideas; I think they outline a precise idea, but it is simply not how physicists traditionally abstract physics. Physics has more of a tradition of reductionism and a quest for eternal fixed laws. This dominates even today, so all mainstream models today are of this structural realist form.

If we think about how GR was developed: somehow the development of intrinsic differential geometry by Riemann was, it seems, almost a pre-requisite for the understanding of GR.

I think we are in a similar situation. We still lack a developed theory of intrinsic inference. This will be a pre-requisite to understanding and combining the "inference perspective" of QM with the observer-dependent views of GR. Unfortunately the structural realist view that IS dominating has led to another approach: that the observer-invariant form of GR IS what should somehow be subject to inference, RATHER than seeing the set of inferences as being related by some yet unknown intersubjective rules.

So I think we need more study of mathematical inference models. We simply lack the right structural framework to pose the questions properly. Until we have it, I think ideas expressed in words are the only guide.

/Fredrik
 
  • #29
oldman said:
...the replication of information coded in DNA by stereochemical means is the essence of reproduction, such replication being one of nature's invented self-promoting tricks.

I think you have this backward. I don't think anyone has seriously suggested that DNA molecules somehow appeared along with all the molecular mechanism needed to replicate them... and then self-replicating systems emerged.

The earliest self-replicating systems would have been nowhere near this well-organized -- maybe something like pools of simple organic molecules capable of mutually-catalyzing reactions... so that when the pool happened to get splashed into several pools, those reactions could produce more of the same set of molecules, and keep themselves going.

It's very hard to imagine what very primitive "life" may have been like... literally all we know about it is that it was able to make copies that were able to make copies that made more copies. Now random physical processes do lots of very interesting things, in dissipative systems, for example. But self-replication is a very special kind of "self-promotion". The particular "trick" of making new versions of a system -- versions susceptible to variations that also get reproduced in turn -- isn't something we see happen much. And if something like this does ever get going, it's probably very unlikely to continue for long in the generally entropic environment of physics.

But, once you have something that can copy itself, then the longer the process keeps going, the more likely it is to be able to keep going, because this kind of thing can evolve and adapt. So this is the beginning of the story... and DNA comes many chapters later, as a highly stable storage mechanism for encoded protein-building information.

It worked so well that now nearly all life makes use of it. But the point of this story is that the basis of life is a specific and very special "functionality" -- something life does, not the physical mechanisms any particular life-form evolves in order to do it.

If molecular biology had become possible before Darwin, we might have a situation in biology similar to the one we have now in physics. We might have a vast amount of information about an extremely complicated, very finely-tuned system of cellular mechanics, consisting of many interdependent sub-systems, each with its own modes of operation, but no fundamental principle to explain what's going on or why. We would even have a puzzle similar to QM, in that we'd see very highly ordered and predictable behavior at the "macroscopic" level of the living cell, somehow based on the nearly random activity of individual molecules within the cell.
 
  • #30
apeiron said:
If you want to explore a biological analogy properly, then I would say you would need to anchor it in theoretical biology - precise models of what makes life different from non-life, bios different from abios.

For instance, both bios and abios are functional in thermodynamic terms - they arrange themselves into structures that dissipate entropy. So self-organisation and fine-tuning can be explained in that context.

But then actual bios does something else. It does not just develop (which is all a dissipative structure does) but also has the secondary machinery to control and even evolve.

As Howard Pattee puts it, it uses rate-independent information to control rate-dependent processes. So for example, our genes (which store information in a "timeless" fashion), throw enzymes into the mix to control the rate of some metabolic reaction, some self-organising dissipative process.

This genetic information does not develop (it stands apart from the usual molecular wear and tear) but it does evolve - there is a process for mixing up the information every so often and trying out some new combination.


This makes sense to me. Life does have the ability to control ongoing entropy-increasing processes and coordinate them with other processes. This is certainly a basic and distinctive “functionality” pertaining to nearly all life-forms (though not perhaps to viruses and prions). And as a result, organisms get to be "finely-tuned" in a great many different respects at once (like our universe), unlike other dissipative systems.

But the point of a functional explanation is not just to come up with a general description that fits nearly all known cases... though that’s obviously a very useful thing! The goal would be to understand how and why this happens. And in biology we do understand where this “rate-independent information” comes from and how and why it evolves. That’s all based on what I think of as the “key functionality” of self-replication.

So “abios” refers to all the random processes that can occur in physical systems as they “slide down the thermodynamic gradient”... And “bios” refers to all those same physical processes, to the extent that they’ve been selected and replicated as in some way useful to the process of self-replication.
 
  • #31
Fra said:
I guess you are hinting that just - MAYBE our preconception that nature "obeys laws" etc, and thus that there must be some underlying formal system from which all can be derived - is wrong...

...could the QUEST for such "compactified" understandings in the form of formal reductions still be RATIONAL? What is its utility?


Well, I don’t think it’s wrong to think that nature “obeys laws” – clearly it does. But you’re right, I think this ability to be lawful must be based on something deeper.

The posts by friend and oldman above reflect a sense that the lawfulness of nature and the mathematical self-consistency of those laws almost have to be the basic explanation for things. As you know, this point of view goes back to the beginnings of philosophy and science. And it is such a remarkable idea, that “the Logos steers all things through all,” as Heraclitus says – and it fits so well the way we philosophers and scientists like to think.

And the quest to uncover underlying laws clearly has tremendous utility, since it gave us science. I think the search for “unification” that led to the Standard Model was the great intellectual accomplishment of the 20th century. But I suspect we now need a different strategy.


But I think what you’re suggesting is that in nature itself there is some “utility” to things “obeying laws” – i.e. having all this random interaction “reduce” to conformity with a relatively small set of relatively simple formal principles. I certainly agree. The problem is describing the underlying “functionality” in terms of which the laws are useful.

I’ve been intrigued by your suggestion that we might find a model for this in the inference process by which scientists “reduce” the welter of phenomena to a relatively compact set of laws. It’s a guessing game in which guesses are tested against specific cases to improve the guess, and where a key part of the game is trading information about which guesses work. But I haven't yet seen where to find that kind of process in physics.

My own guess is that the measurement process is the “key functionality” that makes laws useful. For one thing, it involves literally all observable phenomena, and so all of physics. For another, QM gives us very strong indications that things are “real” and determinate and lawful only to the extent they are measured. And for another, the very difficulty of the question of what constitutes a physical “measurement” points toward a type of analysis that seems to me very new and promising.

What I have in mind is that every physical parameter, system, law or event gets observed through its effect on a different kind of parameter, system, etc. Physics has focused on isolating systems and parameters to study them separately – giving us a huge amount of excellent information – and then looks for ways to “reduce” or “unify” these descriptions, which has also worked very well, up to a point.

But now we may need to ask a different type of question, about the role each physical parameter, each type of field or particle plays in making other parameters and other kinds of systems observable. In other words, the question about measurement suggests the need to understand the relationships between the different kinds of forces, etc., so that they provide a context for measuring each other. If the world were a single, simple mathematical pattern, that might be lovely, but how would it be observable?

We take being “observable” for granted – we assume that if there’s something there in the world, then of course there must be some way to measure it. But there’s no logic to that. In fact, measuring any specific type of physical information requires other specific types of information to be known. So what kind of information-structure is this, that can measure all its own parameters by means of other parameters?

And can we imagine simpler kinds of systems that can do this “trick”... out of which our universe might perhaps have evolved?
 
  • #32
ConradDJ said:
I think you have this backward. I don't think anyone has seriously suggested that DNA molecules somehow appeared along with all the molecular mechanism needed to replicate them... and then self-replicating systems emerged.

More a case of you and me having crossed wires of communication than of looking at things backward. What you say about DNA not appearing fully-blown, as it were, is of course correct. I'm no creationist, and think that your follow-up is a very reasonable shot at telling the beginning of the story. But I prefer discussing histories that don't begin with guess-work.

So this (may be) the beginning of the story... and DNA comes many chapters later, as a highly stable storage mechanism for encoded protein-building information.

Yes indeed. But this whole self-replication story is about a clever trick of nature's (even if DNA took billions of years of mysterious stereochemistry to perfect). It's a trick of the self-promoting kind, akin to fluvial erosion, safe-cracking and sex; once it happens it tends to happen again, because nothing succeeds like success. Perhaps biologists should recognise the generic type, rather than the particular case.

... But the point of this story is that the basis of life is a specific and very special "functionality" -- something life does, not the physical mechanisms any particular life-form evolves in order to do it.

Are you saying here that life 'does the reproduction dance' rather than acting as an agent for reproducing DNA? You then disagree with that polemical biologist, Richard Dawkins? Not that this is a bad thing, of course --- he is very strident. Your invention, "functionality", is I think too unspecific to separate such possibilities.
 
  • #33
oldman said:
But can you give frustrated physicists hope by giving specific examples of how 'local-global interactions' actually work? Better still, has this approach succeeded in predicting observable phenomena? Again, examples would help.

You are right that the approach I suggest is more philosophy than science at its current stage of development. But one of the "observables" it predicts (which other approaches don't predict) is precisely that a complete model of a system would throw up a tale of the local constructive substances (ie: QM) and the globally constraining form (ie: relativity) - and furthermore, that their interaction over all scales would result in a power-law outcome (ie: renormalisation).

Anyway, some familiar examples of physical systems with a local~global logic. A bar magnet (local dipoles, global magnetic field). A Rayleigh-Bénard convection cell (local thermal jostling, globally organising convection currents).

Now these are only examples of partially self-organising systems. A bar magnet does not create its own dipoles; it just aligns them. A Bénard cell does not create the boundary constraint that is the sides of its heated metal pan; it just responds to their given existence. But it is very difficult to find everyday physical examples of what I am talking about - a totally bootstrapping self-organisation - because the everyday world is already so full of strong physical constraint.
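Still, the partially self-organising cases are easy to caricature in a few lines of code. Here is a mean-field Ising sketch of the bar magnet (a standard textbook toy with arbitrary parameters, not a model of the bootstrapping case): each local dipole responds only to the global magnetisation, while that global magnetisation is nothing but the summed local dipoles.

```python
import random, math

# Mean-field (Curie-Weiss) caricature of a bar magnet: each local dipole
# flips in response to the GLOBAL magnetisation M, while M is itself
# nothing but the sum of the local dipoles. Parameters are toy choices.
N, J, T, steps = 1000, 1.0, 0.5, 20000   # T below the mean-field critical point
spins = [random.choice((-1, 1)) for _ in range(N)]
M = sum(spins)

for _ in range(steps):
    i = random.randrange(N)
    s = spins[i]
    dE = 2 * s * J * (M - s) / N         # energy cost of flipping spin i
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -s                    # Metropolis acceptance
        M -= 2 * s

print(abs(M) / N)  # near 1: global order that the local rule never mentions
```

The local rule never mentions order, yet the system settles into a globally ordered state that then constrains every local flip - local~global interaction in miniature. But note again: the dipoles and the temperature are simply given, not bootstrapped.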

The physical world looks like it is made of solid stuff (atoms, matter), and it takes a lot of energy to melt that state of strong self-organisation. It takes an LHC or a quantum-level experiment to melt the familiar world of classical local matter and classical global laws, and so discover that what we see has self-organised in local~global fashion.

But biology and other "soft" sciences are a better place to see a systems logic at work because - due to their exploitation of entropy gradients - they still have degrees of freedom that can be shaped, informational constraints that can be developed.

So take an example like a body organ: it has some global function - a purpose. And it is composed of cells that could have been many different types of cell (they originally had the pluripotency of stem cells), but they have been shaped by the particular organ's purpose. So a liver is made of liver cells, not heart cells.

Or you could take other examples such as the way the receptive fields of neurons are shaped by a prevailing state of attention/anticipation.

But you would still be correct that all this is still more about explaining what is already observed than predicting what will be observed.

There are some proto-mathematical tools - hierarchy theory, fractal geometry, complex adaptive systems, scale-free networks, constructal theory, generative neural networks, dissipative structure theory, etc - that "talk around the subject". But it is far from a "shut up and calculate" level of development.

However, that is also why it is an "opportunity". Either physics is so close to unifying QM and GR that just another little push with one of its 50 or 60 varieties of GR-reduction and it will all click into place. Or there is actually a reason why nature resists such a collapse and so room to consider other ways of framing the task.
 
  • #34
oldman said:
Yes indeed. But this whole self-replication story is about a clever trick of nature's (even if DNA took billions of years of mysterious stereochemistry to perfect). It's a trick of the self-promoting kind, akin to fluvial erosion, safe-cracking and sex; once it happens it tends to happen again, because nothing succeeds like success. Perhaps biologists should recognise the generic type, rather than the particular case.

Yes and what is the generic case here?

The reason for the "unreasonable effectiveness" of DNA molecules - and also, serial human speech - is a constraint of dimensionality, a global constraint of local degrees of freedom.

The key to bios is an ability to store rate-independent information about rate-dependent processes. You have to have some kind of memory mechanism that stands apart from the usual thermodynamic fray, so as to be able to harness these very same dissipative processes.

And nature does this via the addition of further constraints. A DNA molecule is not 3D like a protein molecule, or even 2D like a membrane (membranes are used in cells to constrain reaction dynamics of course - a film has a different rate than a volume). It is a reduction of a structure to 1D, which in turn allows a further constraint to the 0D of a point - or in DNA's case, a codon.

The only thing that matters to a codon is its place in a serial sequence. The other directions of space are frozen out and don't exist so far as the coding structure of DNA is concerned (and even the dimension of time, because DNA is by far the most robust kind of molecule - other cellular molecules, even structural ones like microtubules, can have half-lives measured in seconds).

So nature uses constraint over dimensionality all over the place to harness dynamics - cells use membrane, pores, and all sorts of other physical constraints. But the really big trick was a result of the most extreme possible dimensional reduction - that to a serial code which put the information as far away as physically possible from the real world of dissipative process...so as to be able to turn around and control those processes by imposing yet further boundary constraints on them (in the form of enzymes, etc).

And nature discovered this trick at least twice. So as well as DNA as a serial coding mechanism, humans also evolved serial speech. A limitation on vocalisation (the ability to articulate only a single phoneme at a time) also became the constraint that unlocked the coding potential of human language - a new kind of DNA to underpin socio-cultural evolution.

All this seems a long way from physics. But in fact it is the physics - a generic model based on the notion of constraints on degrees of freedom - of modern theoretical biology.

So again, if we follow Schrödinger's advice in What is Life?, then biology really can offer a broader view of how the world works.
 
  • #35
apeiron said:
...Either physics is so close to unifying QM and GR that just another little push with one of its 50 or 60 varieties of GR-reduction and it will all click into place. Or there is actually a reason why nature resists such a collapse and so room to consider other ways of framing the task.

Thanks for these two discursive and illuminating replies, apeiron. They provide lots of food for thought. A comment: the reason why 'nature resists such a collapse' may simply be that we're not smart enough to do the job. I do hope this is not so; we may not have fully exploited one skill we excel at --- a facility for recognising patterns. We should use all we've got.

Such as your noting that: "nature uses constraint over dimensionality all over the place to harness dynamics - cells use membrane, pores, and all sorts of other physical constraints." I would call this one of nature's tricks; a trick being something surprising in both outcome and underlying simplicity, with some tricks being more effective than others. It seems to me that recognising effective tricks, or a class of effective tricks, like those that are self-promoting, may help us to understand nature better.

Here's an example of a trick that involves 'dimensional reduction' and is also 'self promoting'. You're probably aware of it:

Three-dimensional crystals grow at surprisingly low supersaturations because their translational symmetry is in practice hardly ever perfect. A one-dimensional linear defect can convert a three-dimensional lattice into a two-dimensional spiral ramp (like a multi-level parking garage). A surface intersected by this defect then becomes a self-promoting site for growth at theoretically impossibly low supersaturations. Perhaps this trick has 'global' (the lattice) as well as 'local' (the defect) aspects as well, and could be called a local-global trick.

The point I'm trying to make is that nature, with its huge bag of tricks, seems to be much smarter than we are. Even the clever fellow who recognised this trick (Charles Frank) didn't fully unravel the almost biological complexities that such defects can create in crystals. Makes one wonder about the potential complexities of defects in the now-being-considered symmetries of fundamental physics.
 
  • #36
oldman said:
Are you saying here that life 'does the reproduction dance' rather than acting as an agent for reproducing DNA? You then disagree with that polemical biologist, Richard Dawkins? Not that this is a bad thing, of course --- he is very strident. Your invention, "functionality", is I think too unspecific to separate such possibilities.

Dawkins may be pretty dumb about some things, judging by hearsay about a recent book of his that takes on religion. But the "selfish gene" thing is good, and I especially like his little book River Out of Eden as a reminder of how evolution works and how powerful this self-replication business is.

"Functionality" is purposely un-specific, because I'm more interested in raising the question about what the basic functionality is, than coming up with a definitive answer. Even in biology where we know the thing quite well, conceptually, evolution is complicated and gets more and more so over time. So while it's accurate to say it's essentially all about things making copies that make copies... really what "functionality" points to here is whatever it is that evolves, so that it can keep on evolving.

In case it's not obvious, I'm using the term in the sense of the functionality of a button on your computer screen, or of a piece of software or hardware -- i.e. a description of what it does, what it's good for.

In the case of physics, I think that to describe the "basic functionality" as measurement, or observation, or the communication of information, comes close. But none of these terms are really well-defined yet, in physics, though we know what they mean well enough in daily life. But I'm not trying to give a precise definition, at this point. First we need to get a feel for what's going on with this business of determining information through interaction that then gets passed on as part of the context for determining other information, and so on.

I'm thinking that eventually we may be able to picture this process as the kind of thing that can evolve, just as we can picture the evolution of self-reproduction in biology. Then we'll be in a better position to describe just what's needed to make this work.
 
  • #37
I have a feeling that I'm unable to convey what I mean, which is a bit weird because I think your idea of the measurement process is not far from it.

What I describe is a deeper idea of things that's somewhat abstract, which is why I think it's hard to convey, but it's also very simple.

ConradDJ said:
The problem is describing the underlying “functionality” in terms of which the laws are useful.

My suggestion is inference. The functionality is inference. What we need to describe is the physics of inference.

A physical interaction process = measurement process is nothing but an inference process.

Expectation -induces-> action; backreaction -induces-> revision of expectation, which takes place at two levels: information-preserving revision and non-information-preserving revision.

It's the self-preservation of the inference systems that causes equilibrium points to evolve. The "logos" of the interactions is then nothing but a set of equilibrium strategies. They are stable because no observer/subsystem benefits from changing theirs. This is how I envision that we will explain the SM, for example.
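To illustrate what I mean by equilibrium strategies, here is a game-theoretic caricature (the payoff table and the "fictitious play" learning rule are assumptions for the sake of the example, not my actual framework): two inference systems, each keeping an empirical model of the other and best-responding to it, settle into a strategy profile that is stable precisely because neither benefits from changing it.

```python
# Fictitious play on a simple coordination game: each player keeps an
# empirical model of the other and best-responds to it. Play settles
# into a Nash equilibrium - stable because no one gains by deviating.
payoff = {(0, 0): (2, 2), (1, 1): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0)}
counts = [[1, 1], [1, 1]]   # counts[player][opponent's action], smoothed

def best_response(player):
    opp = counts[player]
    total = sum(opp)
    def value(a):   # expected payoff of action a against the opponent's mix
        return sum(opp[b] / total * payoff[(a, b) if player == 0 else (b, a)][player]
                   for b in (0, 1))
    return max((0, 1), key=value)

for _ in range(100):
    a0, a1 = best_response(0), best_response(1)
    counts[0][a1] += 1      # player 0 updates its model of player 1
    counts[1][a0] += 1      # and vice versa

print(a0, a1)  # settles on (0, 0), an equilibrium neither wants to leave
```

The "law" the two players end up obeying is not imposed from outside; it is just the strategy pair that survives their mutual inferences.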

What I'm suggesting is that each physical system is, loosely speaking, one-to-one with a particular inference system. This inference system has predictable interaction properties, since we can predict how it would rationally respond to input. But the predictions are not deductive; they are inductions, again only inferences, that further determine the action of the physical systems where the "logos" lives.

It's a picture that may seem circular and chasing itself, but that's why evolution is part of the expectations. There is no objective or global equilibrium. This IS a game, and all "predictions" and testing of predictions come in the form of "playing the game". This is the deeper insight I advocate, and is why I think we should study evolving and interacting inference systems.

But this is a generalization of the inference we have in QM. I'd say the inference we have in QM is a subset of the more general case - although still a generalization as compared to classical inference (thermodynamics).

The exact mathematics to use for this is still unsolved, but I insist that the key idea is very simple. The problem is that on one hand we seek a "background-independent" inference model (because most inference models do have such backgrounds), but the way to get there is NOT to REMOVE the background; it's instead to understand how the background evolves, and with it the new inference system.

/Fredrik
 
  • #38
The obvious question one can have is: OK, I have my "pet model", which is that everything is inference. Why is this better than all the other "pet ideas", like the idea that "everything is geometry", or that everything is just abstract algebras, etc.?

I think the difference is that inference models are the perfect match with science. Science is essentially an inference process itself. You know the old debates about the problem of induction and Popper's resistance against this description. Not to mention that any learning process is nicely abstracted as an evolving inference system.

Aside from this, I think ideas like "manifolds" and "geometry" really seem almost pre-historic remnants of the realist history of human science.

/Fredrik
 
  • #39
Fra said:
The functionality is inference. What we need to describe is the physics of inference... A physical interaction process = measurement process is nothing but an inference process.

Expectation -induces-> action; backreaction -induces-> revision of expectation, which takes place at two levels: information-preserving revision and non-information-preserving revision.

It's the self-preservation of the inference systems that causes equilibrium points to evolve. The "logos" of the interactions is then nothing but a set of equilibrium strategies. They are stable because no observer/subsystem benefits from changing theirs...

The exact mathematics to use for this is still unsolved, but I insist that the key idea is very simple.

Hi Fredrik –

My sense is that you may well be on the track of something important. As in the “Inductive Inference” thread currently running – there seem to be several lines of research that identify the basic structure of QM with the logic of inference. And I very much agree with your idea that we need an evolutionary understanding of basic structures, instead of just assuming them in the background of the theory.

But, I think your idea is probably too simple... I think you may be falling into the trap of hoping for a single, simple mathematical principle that explains the whole show.

Specifically, I’ve been trying to show that measurement is not “nothing but an inference process.” As you’re well aware, inference needs to work on incoming information, and there is no information without a context of other information – in fact, a context of other kinds of information -- that let it mean something. My emphasis is on the way the different elements in the structure of physics work to make each other meaningful and communicable.

So the logic of inference may well play an important role here, but it’s not the whole picture. Unfortunately “the whole picture” is very difficult to assemble, in physics. The methodology of formal / mathematical analysis strongly tends to isolate one element – there is still the Platonic goal that we will be able to derive everything from One idea.

That goal was in a sense reached, by Darwin’s insight in biology. But even the simplest self-replicating systems may well have involved several types of molecules interacting in many different ways. What’s simple in biology was never the specific structures or processes, but the overall “thing” that they all accomplish, working together, i.e. self-replication.

You describe a system of interacting “observers” reaching an equilibrium, which can then be disturbed by new data, which might then require a new equilibrium, and so on. The system would evolve by requiring the schema shared by the observers to become more complex, so as to be able to make sense of more of the data. But again, this presumes a mechanism for defining each observer’s data-schema and comparing it with others – a kind of “language”, in effect, that not only shares data but (separately?) shares the schemata.

A parallel in biology might be a species reaching a state in which it’s well-adapted to its environment... but then the environment changes (maybe due to another species adapting to it), so a new “equilibrium” is needed. This is surely an important process in evolution, but it depends on the self-reproduction of each species, which is not basically an equilibration-process. (Though it does involve multiple processes of equilibration both internally and with the environment.)

So again, I imagine your inference-model may be an important part of the “functionality” we need to understand – but it doesn’t look to me like a whole picture, yet.

Thanks for your notes -- Conrad
 
  • #40
Conrad, I see your objections, but I still think that this fits into the inference picture. As I've tried to stress several times, I'm not talking about just probabilistic induction - that is a special case of inference. I'm considering an EVOLVING general inference.

I'll try to be brief for clarity:

I do not have any illusion of finding a simple mathematical principle from which all follows. This should be clear from the type of reasoning I constantly insist on: there are no 100%-confident premises and no deductions. This of course applies even to me - I do not have the illusion of a simple one-line TOE from which everything follows in an instant. What's simple is more the evolutionary mechanism, which is pretty much Darwin's, BUT I have a slightly more SPECIFIC suggestion, in terms of general inference.

It seems you think that natural evolution and selection in biology is a good example of the analogy we seek - right? But as I think you also agree, evolution is NOT just variation and selection on DNA, because obviously evolution started way before the first stable DNA or RNA was on Earth, right? To consider just the evolution of DNA sequences in the space of all possible DNA is to think there is a fundamental level on which to apply optimizations, but this is a simplification.

So here's my analogy.

The "DNA" of the inference systems is coded in each physical system (each observer) and gives rise to the rules that yield expectations of the future knowledge, based on the present knowledge. This expectation determines the action of this physical system, as a form of generalized "diffusion" (but over non-commutative discrete structures, which si exactly why it ISN'T normal diffusion, and the reason why we need a NEW mathematical model for this inference, the probability based does not work. But borrowing the words helps since the original intent is the same.).

Each inference system thus acts on, and gets reacted on by, its environment. This puts selective pressure on evolving FIT inference systems that are self-preserving.

The process by which an inference system handles inconsistent feedback, for example, works at two levels: sometimes a simple state revision helps; sometimes there is no consistent correction, and this implies breakdown of the inference system, because in my view the inference system is like a steady-state structure - unless it's continuously supported, it decays.

This picture thus contains small variations, and selection.

"Reproduction" can be pictures in a more sophisticated way - by induction of a environment hospitable for "similar" inference systems. So the reproduction mechanism is then simply to provide the right breeding grounds for "similar thinkers".

Then I think you say that this is still too simple, because ALL of what I say still requires a context. Yes, but this is why even THIS description lives within another inference system! In this case it happens to be my brain, but obviously I am not an exception. I'm constrained by the same principles as an atom.

What I'm saying is equivalent to saying that each physical system has a "model of reality" encoded in its internal state and structure. It's just that for simple things like atoms, this is of course nowhere near the complexity of my brain - but that's why the action of an atom is also MUCH simpler than the action of a human.

I'm not saying that the mathematics for this is worked out, but I don't see how your concerns are not taken into account. The prediction of this is not a simple thing from which all follows. However, if we look for the smallest DNA of physical law that we humans can distinguish, then looking at subatomic systems or Planck regimes seems like the way to go.

I do not think it's unreasonable to think that maybe we can make inferences about this, which may yield us first-principles understandings of, for example, why we have 3 dimensions, and why we have the particular values of the masses and parameters of the SM.

I think the reason why ST just gives us a gigantic landscape is that it fails to connect the excitations to the evolving background. The analogue of that, in my view, is to understand how the inference system evolves in response to the state. Let's just say I have more specific but very immature ideas on this, but the overall idea, I think, seems to be in your direction.

/Fredrik
 
  • #41
ConradDJ said:
That goal was in a sense reached, by Darwin’s insight in biology. But even the simplest self-replicating systems may well have involved several types of molecules interacting in many different ways. What’s simple in biology was never the specific structures or processes, but the overall “thing” that they all accomplish, working together, i.e. self-replication.

Again, evolution is only half the story. You need development as well. There is replication, but also metabolism.

If we are talking analogies, this does fit a decoherence approach to QM. A QM wavefunction freely develops some indeterminate potential, and the environment imposes constraints that collapse the wavefunction and select some actual outcome. There is an evo-devo story there.
 
  • #42
The function of the universe is to make actions stationary, just as it is a falling body's telos to be at the center of the earth. It doesn't seem to me you're adding any new information by describing things in terms of functionality, but returning to a teleological view that is unnecessary. With evolution, the functionality is a product of reality - only the fittest replicators for survival will survive. It's tautological. We only see functionality because we are viewing the whole history from one end. Maybe a functional perspective could be a good one to adopt from a psychological point of view, in that it could lead to progress by changing our viewpoint, but it doesn't seem to me like such a thing could ever be fundamental - or that we would ever be justified in calling it fundamental.
 
  • #43
dpackard said:
It doesn't seem to me you're adding any new information by describing things in terms of functionality, but returning to a teleological view that is unnecessary. With evolution, the functionality is a product of reality - only the fittest replicators for survival will survive. It's tautological.

It's true that in general, when we talk about the function of something, we're thinking in terms of teleology, i.e. something that has a purpose. But evolution is an exception; it doesn't have any extrinsic purpose, it's not happening for any reason. It's just random accident. And you're right that "survival of the fittest" only means the survival of those that happen to survive.

But the point is that random accident almost never gives rise to structures anywhere near as complex and finely-tuned as an organism. This happens only when a very unique type of "functionality" happens to get started -- as self-replication happened to get started on Earth. Once you have entities that can make copies of themselves and more or less successfully replicate variations, then mere random accident can eventually generate all the complexity of life.

The suggestion I'm exploring is that there is another such unique functionality at the basis of physics. I'm thinking that the things we tend to take for granted about physical reality -- e.g. that things have determinate characteristics, and "obey laws", and that the lawful interaction between things communicates observable information -- constitute this other special functionality. We don't have a word for it, just because we've always taken it for granted, but we might call it a kind of "self-determination" -- i.e. something like the ability of a system of interactions to define its own characteristics to itself.

The idea is that, like self-replication, this functionality of "self-communication" is such that once it gets started, then merely through random accidents of succeeding and failing it can evolve more and more complex ways to do this thing... eventually producing a very finely-tuned system of many types of entities and interactions, in which everything both "measures" and "is measured by" everything else.

So you're right that just to take whatever happens in physics and call it "functional" would add no new information. On the other hand, my argument is that the pervasive fine-tuning of our universe is at least prima facie evidence that there may be something special going on here, similar to the special functionality at the basis of life.
 
  • #44
Fra said:
The "DNA" of the inference systems is coded in each physical system (each observer) and gives rise to the rules that yield expectations of the future knowledge, based on the present knowledge. This expectation determines the action of this physical system...

Each inference system thus acts on, and gets reacted on by, its environment. This puts selective pressure on evolving FIT inference systems that are self-preserving...

This picture thus contains small variations, and selection...

"Reproduction" can be pictured in a more sophisticated way - by induction of a environment hospitable for "similar" inference systems. So the reproduction mechanism is then simply to provide the right breeding grounds for "similar thinkers".

What I'm saying is equivalent to saying that each physical system has a "model of reality" encoded in its internal state and structure.


Fredrik – Apologies, I take it back about your scheme being “too simple”!

Maybe it’s just because you’re speaking my language here, but your last post gives me a much fuller picture of what you have in mind. It does seem as though you use the concept of “inference” more or less the way I use “communication” or “measurement”. That is, it refers not just to the specific act of “making a guess” based on certain data, but to the whole physical context that contains the data and gives feedback on the success or failure of the guess, and also relates the guesses of different inference systems to each other.

So I can see that if we could find a way to translate this into physics – how “expectations” are encoded in the properties of systems, how they get expressed in a system’s “actions”, what the feedback is and how it gets incorporated into the inference-system (or else kills it off), and how the ”model realities” of different systems affect each other – then it could be the sort of special functionality I discussed in the previous post, that supports a genuine evolutionary process.

To put this into mathematics – is there a way to specify logically what the elements of such a system would need to be? I imagine something like a flow-chart that pictures both what happens internally within each inference-system to test the model against data and to generate actions, and also what the external linkages would need to be. Does the “data” for each system consist simply of the “actions” of the other systems?

I know this is work-in-progress and I don’t mean to press you for answers... I’m just wondering, seeing if I can get the picture you're imagining.
 
  • #45
Conrad, now we are more in tune!
ConradDJ said:
it refers not just to the specific act of “making a guess” based on certain data, but to the whole physical context that contains the data and gives feedback on the success or failure of the guess, and also relates the guesses of different inference systems to each other.
Yes. The logic of "making an isolated guess" is one component only of the total picture. It's the problem of placing bets optimally.
ConradDJ said:
So I can see that if we could find a way to translate this into physics – how “expectations” are encoded in the properties of systems, how they get expressed in a system’s “actions”, what the feedback is and how it gets incorporated into the inference-system (or else kills it off), and how the ”model realities” of different systems affect each other – then it could be the sort of special functionality I discussed in the previous post, that supports a genuine evolutionary process.
Yes, this is exactly what I mean.
ConradDJ said:
To put this into mathematics – is there a way to specify logically what the elements of such a system would need to be? I imagine something like a flow-chart that pictures both what happens internally within each inference-system to test the model against data and to generate actions, and also what the external linkages would need to be. Does the “data” for each system consist simply of the “actions” of the other systems?
Since it's clear that this is an open issue and I don't have any final or complete answers, I can go ahead and give some ideas just for the sake of discussion, without risk of misunderstanding.

ConradDJ said:
is there a way to specify logically what the elements of such a system would need to be? I imagine something like a flow-chart that pictures both what happens internally within each inference-system to test the model against data and to generate actions, and also what the external linkages would need to be.

I have some thinking of my own here, but it would be wrong to call it "logically unavoidable", since by consistency of reasoning we see that even the description of the evolving inference systems is itself subject to the same constraints and is itself evolving too. So all *I* have is an expectation of this "logic".

My abstraction is composed of several components:

* A finite set of distinguishable events that we can think of as indexing the expected types of events that the observer can distinguish. I picture this as a finite index, which we can consider one-to-one with a bounded set of integers. The bound represents the bandwidth of the horizon.

* An internal structure (in general a composite structure of sets of sets, which defines flows between the sets, and which can also be interpreted as a non-commutative information space). This represents the observer's "memory of the history of events", but it's not optimized to "store time histories"; rather, the information that initially arrives as streams is processed and optimized for the benefit of the expected future. I.e. we "remember" what we think is useful for the future, and the rest is discarded - which is pretty much how we believe the human brain works. The human brain is not optimized to remember correct historical sequences; our memories instead have the purpose of helping us navigate into the future. Only some disorders of the brain cause humans to have extremely good detailed memory, such as amazing photographic memory, and these disorders of course have other side effects that have to do with foreseeing the future, social interactions etc. I picture this as a finite set of sets of event histories, where information can flow from one set to the other as defined by various data-compression methods. I imagine a total bound on the set of sets, so that there is a complexity bound on the total memory system. This represents the information capacity.

* Implicit in this internal structure are internal flows that are entropic in nature and are defined by the relations between the sets. These define rational expectations and natural actions that are simply entropic in character. But the difference from simple thermodynamics is that we have non-commutative structures here - this means that more sophistication, and in particular CYCLIC (not just dissipative) flows, appear! I'm working on mapping out these expected flows. In this language, the DETERMINISTIC evolution of QM will be replaced by the observing system's EXPECTED evolution of its image of the environment. Now, these two things coincide in situations where the expectations have reached an equilibrium and the expectations of the future stay consistent with the actual future, so that a revision of just the state of the structure, and not the entire inference backbone, is needed.

There are many components in this that are still troublesome. The above is still just a "snapshot" of the entire story. In particular, the index sets, the internal structure and the transformations, as well as the SIZE (bandwidth) of the index and the SIZE (capacity) of the internal memory structure, are also evolving! To determine their values we get even more self-references. But the size and evolution of these measures I associate with the problem of the origin of mass and inertia. The SIZE of the microstructure (information capacity) is a measure of the inertia or "mass" of the memory structure, and to understand how an inference system can gain or lose this inertia is one of the KEYS to understanding the origin of gravity in this picture. It's hard to explain shortly, but this entire picture actually predicts that TWO inference systems "attract" in a way that depends on their inertia, in the sense that their "information divergence" decreases the more they interact. Similarly, the resistance against change when exposed to conflicting information is also directly related to this complexity.

Anyway, I'm working slowly on some of these things. The expressions for the expectations, and for calculating say "probabilities" for different futures, get very involved, but they are essentially combinatorial expressions. In the continuum limit or commutative case they have interesting similarities to transition amplitudes that go like e^(-M·S_KL), where M is the "size" of the conflicting information and S_KL the information divergence. But when the structures are non-commutative (which they of course are in all but trivial cases) things get more hairy, and the problem of counting the set of possible transformations is still something that's bugging me.
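Just to pin down the commutative-case weight numerically, here is a minimal sketch (the function names and numbers are mine, purely for illustration) of e^(-M·S_KL), showing how a larger "mass" M suppresses transitions that diverge from expectation:

```python
import numpy as np

def kl_divergence(p, q):
    """Information divergence S_KL(p||q) in nats, for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                     # zero-probability terms contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def transition_weight(p_new, p_expected, M):
    """Toy weight e^(-M * S_KL): inertia M resists divergent revisions."""
    return float(np.exp(-M * kl_divergence(p_new, p_expected)))

# The same conflicting evidence barely moves a "heavy" inference system:
p_expected, p_new = [0.5, 0.5], [0.8, 0.2]
for M in (1, 10, 100):
    print(M, transition_weight(p_new, p_expected, M))
# M=1 gives ~0.82, M=100 gives ~4e-9: resistance to change grows with "mass"
```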

The problem is that all these ideas remain completely incoherent to anyone not tuned in to the thinking. That's why I have to take this to the next level, make contact with some current physics, and at least come up with some predictions in order for anyone to even care.

But this is of course just like any other program, say ST - except that those are big programs, with A LOT of people working on them. As far as I know there aren't many "communities" working in this direction.

ConradDJ said:
Does the “data” for each system consist simply of the “actions” of the other systems?
Yes, this is a decent way of putting it. Also, "the other systems", from the inside point of view, simply form "the environment". So the system's data is the actions (including the environment's backreactions to the system's own actions) originating at the "boundary" to the unknown - which I mathematically picture as an indexed horizon that defines the boundary between the observer's knowledge and new input, where each index represents a distinguishable event. From the inside, the environment does not have a definite structure. All we have is a communication channel, and the already-present expectation that indirectly contains "an image" of the environment. But still this is nothing but an expectation of the environment - which for example contains strange things such as superpositions etc.

/Fredrik
 
  • #46
Fredrik – there are many interesting details in your post... I wish I were better equipped to get into the specifics. But I see that you’ve made a lot of progress with a difficult question – how to define what’s minimally necessary for the evolutionary process you have in mind.

I think what you take as fundamental about the physical world is its lawfulness. Not that things are lawful a priori – but it’s only insofar as they do act in accordance with certain de facto laws that there can be any predictability or coherent communication between them.

So you envision a lawful universe gradually coming into being as a result of many different interacting participants, each trying to interpret its own environment as lawful, and acting in accordance with that interpretation... which contributes to the lawfulness of the environment for the other participants.

To make this work, you need to assume certain minimal structure is just given – i.e. the sets of distinguishable events, and information flows between them, etc. Of course this is the same procedure as other theories – you set up a mathematical model based on a certain elementary structure, and see if something like actual physics can get generated out of that. In your case the basic structure is fairly complicated, because – like me – you expect that the basic structures actually have to do something – i.e. to generate and test hypotheses about what the “laws” are, and to express those laws in “actions” that can be interpreted by others.

My own guess is that the “lawfulness” of physics evolves in response to something more basic. If we can assume there exists some kind of definite information in the world – e.g. “distinguishable events” that can be indexed and ordered in some way – then I can see that the key issue would be how to generate non-random orderings.

But to me the primary lesson of QM is that the existence of definite information can’t be taken for granted. Nothing is “determinate” in physics that isn’t actually determined in a measurement-interaction – which always involves other kinds of interactions determining other types of information. So physics is not only about evolving toward an “equilibrium” in which everything is predictable according to some set of laws. More basically, the laws have to support the different kinds of interaction-contexts in which physical information can be defined and communicated.

I think you might say – measurement is also an inference-process. It’s not just a matter of inferring the “laws” that make data-streams predictable... it’s also a matter of inferring the contingent “reality” that the data-streams are telling us about. But this inference still needs some sort of definite observable data to work with... and that for me is the key issue. What we observe may be partly lawful, partly random, but in either case it’s determinate, in some degree. So where does this observable definiteness come from? How is it physically accomplished?

This idea is far less developed than yours, so far as mathematical modeling goes. But then my personal goal isn’t to create a new theory. What I would love to be able to do is to show how each of the various structures described by existing theory contributes to making a world of observable information.

We know that whatever the physical world may be, it does in fact succeed in making a huge amount of information observable. QM even suggests there is no information in the world that is not actually “observed” in some way by something. But it’s very difficult not to take the existence of observable information for granted... difficult to see it as something remarkable that the specific, finely-tuned physics of our universe makes possible.
 
  • #47
Hello Conrad, I agree there are so many things to discuss here, which makes it hard. I'll comment more later, but here's just a reaction to your first paragraphs.

ConradDJ said:
I think what you take as fundamental about the physical world is its lawfulness. Not that things are lawful a priori – but it’s only insofar as they do act in accordance with certain de facto laws that there can be any predictability or coherent communication between them.

We may be entering grammar territory here, but I'm not a native English speaker and I am not sure I clearly understand your distinction between "lawfulness" and "being lawful".

But let me say this: I certainly do not assume that there are laws forcing the observer/inference systems. Neither do I assume that there is a logic in the inference system FORCING it to behave "rationally". There is still an uncertainty here.

It's however my conjecture that assuming rationality is the best possible guess. This does NOT mean that I think rationality is forcing - this is a big difference for me. Perfect rationality does not even exist, as there is no certain way to decide what is rational and what's not. The entire construction contains natural uncertainty; this also causes a small but important "variation" that matters in the big evolutionary picture.

I call it a conjecture, but from a certain perspective it follows almost unavoidably, since by construction a "rational action" is the only self-preserving action. Thus, in certain contexts I think one can almost "prove" that irrationally acting inference systems are not stable, and thus aren't observed in nature - except possibly really far OFF equilibrium (think big bang or something similar).

So in my view, the fact that we EXPECT rationality BUT it's NOT PERFECT is even a key feature.

OK, this was just my reaction to your "lawfulness" vs "being lawful", but maybe I got you wrong. If so, please explain. I like to think I've got "decent" English, but sometimes it's obvious that it's not my native language.

(I haven't read the rest of your reply yet... more later)

Edit: One can even summarize my conjecture in the very plausible conjecture or axiom that the only rational conjecture is to assume rationality in unknown inference systems. However, rational action does NOT mean deductive certainty - that's not what it means. It just means rationally placing your bets and playing the game. Some people have objected to this by referring to rational-agent theory in economics and arguing that not all market players are rational, but that objection is based on a misunderstanding of what I suggest: the rationality they speak of is not of the intrinsic kind.

/Fredrik
 
  • #48
Good morning,
ConradDJ said:
So you envision a lawful universe gradually coming into being as a result of many different interacting participants, each trying to interpret its own environment as lawful, and acting in accordance with that interpretation... which contributes to the lawfulness of the environment for the other participants.

When I compare this phrase with what I mean, I think I see what you mean. If the word "lawful" were replaced by "rational", it would be just how I would have phrased it. "Lawful" makes me think of deductive logic, but that's not what I mean. I mean rational and "lawful" in the general, inductive sense.

The key point in my view is that the REASON for this "rationality" is not its logical necessity in the deductive sense, but rather its necessity in the sense that it's the only constructive or self-preserving way. Since this is hard to prove formally, although I see it as almost obvious, I like to call it a conjecture or axiom.

Let's step back and ask: what is our, or any observer's, basic task?

Is the task to predict the future given the present? NO, not quite.

The basic task is: how do we act in a situation where we in fact do not know the future, but only have incomplete guesses? Here my conjecture of rational action comes in. This idea is what I am trying to formalize and translate into mathematical inference models, and then ultimately connect to physical interactions.
ConradDJ said:
To make this work, you need to assume certain minimal structure is just given – i.e. the sets of distinguishable events, and information flows between them, etc. Of course this is the same procedure as other theories – you set up a mathematical model based on a certain elementary structure, and see if something like actual physics can get generated out of that. In your case the basic structure is fairly complicated, because – like me – you expect that the basic structures actually have to do something – i.e. to generate and test hypotheses about what the “laws” are, and to express those laws in “actions” that can be interpreted by others.
Yes, that sounds in tune! I need a certain structure, BUT this structure is ALSO evolving, which is why I called the previous description just a snapshot.

You're also right that the important point is not just to "predict the future"; the more basic point is "what actions to take" given a certain expectation of the future. Also, the important question is not just how to falsify or corroborate a theory - the really important point is how to EVOLVE the theory.

This is a basic trait of a learning model. To just fire off a statement and evaluate it as true or false is a trivial matter. The deep part is how the statement was generated, and how the feedback from evaluations evolves the next guess. Anything in the "scientific model" that doesn't care about that detail is sterile and incapable of intelligent development. Ultimately I picture this in connection with "survival" and fitness of the inference systems. A system that fails to rationally revise its opinion as new information arrives simply gets ripped apart by its environment.
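
To illustrate that loop with a deliberately trivial toy (far simpler than what I have in mind, and purely hypothetical):

```python
# A toy version of the generate -> feedback -> revise loop: the "theory"
# (an estimated coin bias) is never just true or false; every observation
# reshapes the next guess. Purely illustrative, not my actual formalism.
import random

class Guesser:
    def __init__(self):
        self.heads, self.tails = 1, 1          # uniform prior as pseudo-counts

    def current_theory(self):
        return self.heads / (self.heads + self.tails)   # estimated P(heads)

    def revise(self, observation):
        # feedback evolves the guess instead of merely falsifying it
        if observation == "H":
            self.heads += 1
        else:
            self.tails += 1

true_bias = 0.8
g = Guesser()
for _ in range(1000):
    g.revise("H" if random.random() < true_bias else "T")
print(round(g.current_theory(), 2))            # drifts toward 0.8
```

A system like this never stops revising; freeze the revision step and you get exactly the sterile true/false testing I object to.
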

Again, this idea is what I'm trying to formalize, and eventually turn into a new framework for interactions and phenomenology in physics. One has to be fair and say it's unfortunately a massive task, but I find it so plausible that it's nevertheless irresistible - and even irresponsible not - to try.

/Fredrik
 
Last edited:
  • #49
Fra said:
The basic task is, how do we act in a situation where we in fact do not know the future, but only have incomplete guesses? Here my conjecture of rational action comes in...

...the important point is not just to predict the future, the more basic point is what actions to take given a certain expectation of the future. Also the important question is not just to falsify or corroborate a theory - the really important point is how to EVOLVE the theory.

This is a basic trait of a learning model. To just fire a statement and evaluate it as true or false is a trivial matter. The deep part is how the statement was generated, and how the feedback of evaluations evolve the next guess... Ultimately I picture this in connection to survival and fitness of the inference systems. A system that fails to rationally revise its opinion as new information arrives, simply gets ripped apart by its environment.
Fra said:
One can even summarize my conjecture in the very plausible conjecture or axiom that the only rational conjecture is to assume rationality in unknown inference systems.


Thanks, Fredrik. Here’s the picture I’m getting from you so far –

A population of inference-systems evolves toward collective “rationality” because, to the extent that each system acts “rationally”, they create an environment for each other in which each system can better succeed in behaving “rationally”. What it means to be “rational” is not given a priori but is discovered or invented over time, as the population evolves.

This works because, in order for any system to compute its own rational behavior, it has to assume rational action on the part of the others – understanding that “rationality” involves guesswork (inductive approximation rather than deduction), and that the actions of other systems will likewise only approximate “rationality”.

So there will always be a certain degree of randomness in the environment, because systems are always making partly-wrong guesses about the rational principles other systems are following. Also, the ability of individual systems to compute actions based on the given data may be very weak to begin with. But over time, as individual systems learn to make better predictions and use them more effectively to generate actions, the environment will become more predictable.

That’s because systems that can’t guess successfully eventually cease to function, and aspects of the environment that don’t tend to become more predictable become destructive, undermining the possibility of rational inference.


Now it’s not clear at this point whether there’s anything in fundamental physics that can operate like this, as an evolving inference-system. But whether or not this picture of interactive learning-systems turns out to be relevant to physics, it’s very interesting in itself... and if a mathematical model is feasible, I would think it would be relevant in quite a few other areas where learning-systems certainly exist.

My guess is that this model assumes too much to work as a basis for physics. Not that it assumes more than many other theories – Smolin’s CNS as one example. But as I suggested in the OP, I think we need to question some of the key things we take for granted in physics. So I would not want just to assume that physical systems can store indexed data-sets over time, and perform mathematical operations on them.

Even more basically, I don’t want just to assume that the “actions” of one system can produce “distinguishable events” for other systems. I think the fact that there are distinct systems that communicate information to each other is the main thing we need to understand about the physical world and how it works – given that information is only “distinguishable” (or “determinate”) in a context of other information, communicated through other channels by other kinds of systems.

But apart from my own peculiar notions, I get where you’re coming from. I’m glad someone takes seriously that there’s a functional dimension to physics, and that we can try to understand it in terms of the requirements of an evolutionary process. And the method of your madness seems to be similar to mine – i.e. to trust a certain basic intuition about how things work in the world, and try to find language – in your case mathematics – to make it more concrete.
 
  • #50
Now I think your summary of the main idea is right in tune with what I mean!
ConradDJ said:
But whether or not this picture of interactive learning-systems turns out to be relevant to physics, it’s very interesting in itself... and if a mathematical model is feasible, I would think it would be relevant in quite a few other areas where learning-systems certainly exist.
Excellent point. I'm glad you see this. This is why part of what I'm doing is not just physics; it's more than that. It's rather a new framework for theory building and inference in science, though my main focus lies on physics. There are very clear parallels with other fields. Not only does this allow the idea to be utilised in other fields, it also means that intuition and understanding from other fields can HELP develop our understanding of physics.

The deeper aspect of this, which also underlies my choice of analysis, is that in order to understand a scientific theory in the deepest way, one has to understand the scientific process, and the function a theory has.

Here many physicists seem to have a very superficial view. This is much more complicated than just falsification and corroboration. Unfortunately a popular opinion here, which Popper shares, is that these "problems" are not scientific problems but largely problems of the psychology of the human brain. I am convinced that this is a mostly confused position. When I read Popper's book some years ago, it struck me as the work of someone who seeks a way to deny the confusing but true nature of inference.

Someone who does seem to understand this extended application to social theory is Smolin's "side-kick" in his evolving-law context - Roberto Unger :) I think Unger understands this better than Smolin, at least that is my impression from listening to some of the Perimeter talks they have both given. So from my perspective, I think R. Unger has had a good influence on Smolin. Unger may not be a physicist, but he seems clever.
ConradDJ said:
Now it’s not clear at this point whether there’s anything in fundamental physics that can operate like this, as an evolving inference-system.

You have a point here. If it were really CLEAR, then everybody would be working on this, but they aren't.

I can only speak for myself, and all my understanding and intuition tells me that this makes perfect sense even for physics. It's how I understand QM, for example, except of course that "my understanding" implies that QM as it currently stands cannot be fundamental.

Without this, I have to admit that QM would be very confusing to me. I would even say it was my attempts at understanding physics, including QM and the issues with GR and infinities, that have led me to this position.

So for ME, this interacting and evolving inference-system stuff has EVERYTHING to do with physics.

But you are still right that from the somewhat "mainstream" or objective scientific perspective of today this is NOT clear.

It's about as unclear as ST is to me, i.e. what do these wriggling strings have to do with physics?

But this will remain unclear until someone shows how clear it is, and what it can do. Also, the connection between physics and science, and the scientific process - which is generally agreed to be an inference process - is, I think, beyond doubt. Therefore my suggestion seems logical and rational, and in this sense I think it's LESS unclear than, say, ST philosophy, where it's a totally ad hoc trick.

What I'm suggesting here is not ad hoc tricks. It's more a general appeal to analyse the process whereby theories and laws come to be, in a more proper way than, say, Popper did.

ConradDJ said:
So I would not want just to assume that physical systems can store indexed data-sets over time, and perform mathematical operations on them.

Even more basically, I don’t want just to assume that the “actions” of one system can produce “distinguishable events” for other systems. I think the fact that there are distinct systems that communicate information to each other is the main thing we need to understand about the physical world and how it works – given that information is only “distinguishable” (or “determinate”) in a context of other information, communicated through other channels by other kinds of systems.

I've actually given my assumptions quite some thought. They aren't ad hoc starting points. After all, my vision is not just mental understanding; my vision is a mathematical model for physics that allows predictions, computation of statistics, etc. I have really tried to find the simplest possible constructive starting points. I could give some further elaborations and details beyond the main ideas we've already discussed, but instead maybe I can first ask:

Given the basic idea here - what starting points would you choose? After all, I assume that the result we need is something that makes a difference. And without quantitative predictions (= mathematics), what difference do we make?

The human brain can understand this without mathematics, because even a humanist's frame of mind does all this magic. But translating this "insight" into a mathematical model is the difficult thing, and it is what we need to do. So in some way, we need to start putting mathematical clothes on these ideas... I've been thinking about this a lot, and what I'm now trying is simply the simplest way I could come up with. But if there is a better way, I'm open to that.
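
Just to show what a first layer of "mathematical clothes" could look like - a deliberately minimal sketch, with every name hypothetical and none of my actual construction in it - one could begin with an observer that is nothing but a finite memory of distinguishable events, whose "expectation" is the normalized count of what it has seen:

```python
# One possible minimal starting point (hypothetical illustration only):
# an observer = a finite memory of distinguishable events, whose rational
# "bet" is the relative-frequency distribution over that memory.
from collections import Counter, deque

class MinimalObserver:
    def __init__(self, memory_size=32):
        self.memory = deque(maxlen=memory_size)   # finite: information is bounded

    def record(self, event):
        self.memory.append(event)                 # oldest events are forgotten

    def expectation(self):
        """Normalized counts over what has actually been seen."""
        counts = Counter(self.memory)
        total = sum(counts.values())
        return {e: c / total for e, c in counts.items()} if total else {}

obs = MinimalObserver()
for e in "AABABBBA":
    obs.record(e)
print(obs.expectation())   # -> {'A': 0.5, 'B': 0.5}
```

The finite memory is the essential constraint: it is what keeps the observer's "rationality" bounded and its expectations revisable.
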

/Fredrik
 
