# Emergence of Complexity and Life

Thanks for the references. I had not heard of Friston before, but my wife (a cognitive neuroscientist doing fMRI) most certainly had.

Frieden says that the relation between FI and K-LD (a.k.a. "cross-entropy") is that FI "is proportional to the cross-entropy between the PDF p(x) and a reference PDF that is its shifted version p(x + Δx)." That makes intuitive sense to me, because sharp transitions would make that cross-entropy large.

Since it is possible to derive relativistic QM (incl. the Dirac equation) from FI (see Frieden, chapter 4), I wonder what you would get if you derived QM from K-LD?

This all sounds very interesting. I hadn't heard of using FI to derive physics. KL-divergence isn't quite cross-entropy, although they are closely related (https://tdhopper.com/blog/cross-entropy-and-kl-divergence). Fisher information is the curvature of the KL-divergence (https://en.wikipedia.org/wiki/Kullback–Leibler_divergence#Fisher_information_metric). If you are interested, look up information geometry, where Fisher information is used as a metric in a differential-geometric formulation of information theory.
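A minimal numerical sketch of "Fisher information is the curvature of the KL-divergence", assuming a Gaussian family (my choice of example, not from the discussion): for the mean parameter of a Gaussian, the Fisher information is F = 1/σ², and for a small shift d the divergence KL(p_μ || p_{μ+d}) should be approximately ½·F·d².

```python
import numpy as np

# For a Gaussian with known Fisher information F = 1/sigma**2 for its mean,
# check numerically that KL(p_mu || p_{mu+d}) ~ 0.5 * F * d**2 for small d.
sigma = 2.0
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

def gaussian(mu):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def kl(p, q):
    return np.sum(p * np.log(p / q)) * dx    # discretized KL divergence

d = 0.01                                     # small parameter shift
kl_small_shift = kl(gaussian(0.0), gaussian(d))
fisher = 1.0 / sigma ** 2                    # analytic Fisher information
print(kl_small_shift, 0.5 * fisher * d ** 2) # the two nearly coincide
```

For equal-variance Gaussians the KL-divergence is exactly quadratic in the mean shift, so the match here is essentially limited only by the grid discretization.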

hutchphd
The idea that simple iterated rules can generate large apparent complexity is worth noting, but those models have neither energy flow nor a need to respond to changes in the environment, and so have little relevance to living things
I think the conclusion is overstated.
Perhaps "and so cannot comprehensively describe living systems" is a little less categorical and more nearly correct

BillTre
The fact that shining heat and light onto a rotating mass of solid and gas leads to this steady increase in complexity still feels quite surprising to me.
It is not just energy from the sun that powers living organisms.
Chemical energy, found in the environment can provide power independent of light from the sun.
One example is alkaline hydrothermal vents, where sea water reacts with new ocean floor rock and then rises to contact the unaltered seawater. The difference between the two solutions produces a difference in redox potential, which has been hypothesized to power pre-life organic syntheses.

There are many kinds of prokaryotes (bacteria and archaea) that derive power from minerals and a chemical redox partner (which are found in particular environments).

Can we state the conditions under which a system will be driven towards increasing complexity, potentially leading to the emergence of life?
Many think conditions for life to arise would (in a general way) be based upon Prigogine's dissipative structures:
Prigogine is best known for his definition of dissipative structures and their role in thermodynamic systems far from equilibrium, a discovery that won him the Nobel Prize in Chemistry in 1977. In summary, Ilya Prigogine discovered that importation and dissipation of energy into chemical systems could result in the emergence of new structures (hence dissipative structures) due to internal self reorganization.[18]
If the energy difference is too large or too small, dissipative structures won't form. They form at a medium level of potential.
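A minimal sketch of a dissipative structure, using the Gray-Scott reaction-diffusion model (my choice of toy system; the parameter values are illustrative ones commonly used in numpy demos, not anything from this thread): a sustained chemical feed drives the system away from equilibrium, and spatial structure emerges from a nearly uniform start.

```python
import numpy as np

# Gray-Scott reaction-diffusion: a driven, dissipative toy system in which
# a sustained "feed" lets spatial pattern emerge from near-uniformity.
n = 64
U = np.ones((n, n))
V = np.zeros((n, n))
mid = slice(n // 2 - 4, n // 2 + 4)
U[mid, mid], V[mid, mid] = 0.50, 0.25        # seed a local perturbation

Du, Dv = 0.16, 0.08                          # diffusion rates
F, k = 0.035, 0.065                          # feed and kill rates (spot-forming regime)

def lap(Z):
    # 4-neighbour Laplacian with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):
    uvv = U * V * V
    U += Du * lap(U) - uvv + F * (1 - U)
    V += Dv * lap(V) + uvv - (F + k) * V

print(V.std())   # clearly nonzero: a persistent spatial pattern has formed
```

Turn the feed off (F = 0) and the pattern decays, which is the Prigogine point: the structure exists only while energy/matter flows through it.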

Discussing Life As We Know It (Life On Earth):

Dissipative structures may form in particular environments where a driver (like two complementary solutions that could form a productive redox pair) exists.
Environments like this could be considered nursery environments, where opportunities exist for easy harvesting of environmental energy by a simple supra-molecular device.
Thus, the proper dissipative structure could produce organic molecules and become a center for subsequently generating more complexity.

As more (organic) molecules are produced in a local area, they increase the possibilities for novel interactions between different molecules (generating more and different organic molecules).
This has the potential to become a self-reinforcing cycle of generating chemical diversity within a structurally organized entity (derived from a dissipative structure).

Something I have found limiting in the traditional information approaches (at least using Shannon Information) to the issue of biological complexity is the complete lack of any link to the meaning of any particular chemical/super molecular structures that might be generated.

In the real world of biology (largely composed of interacting molecules in complex structures), different molecular components each have a function (or more than one) in keeping their higher-scale enveloping entity reproducing, as a well-adapted reproducing entity should.
What a component does with respect to its enveloping entity, which is what gets selected (it has to reproduce in some way), is where its meaning lies.
New meanings for components can be found among novel combinations of the newly created molecules within the entity, or in features of the environment generated as byproducts of the proto-living entities.
Such meaning would also depend on the particular features of its environment (from which energy is harvested in some way) and on the functional details of the enveloping, reproducing entity of which it is a component.

These ideas are largely derived from those in Hidalgo's book Why Information Grows. It's about economics, but they can also apply to biology.
Here is a thread I posted on that book.

Jeremy England at MIT has probably done some of the most important recent work on this. Here’s one of his articles that kind of started the gold rush in this field:
https://aip.scitation.org/doi/full/10.1063/1.4818538
Jeremy England was recently on Sean Carroll's podcast.

gleem
I do not see any problem with increasing complexity and the 2nd Law. Every day I see the results of metabolic processes as we respire and excrete. I see the results of human activity as we increase our social and economic complexities with the detritus of that activity.

The formation of crystals appears to violate the 2nd Law, but it does not: the latent heat released as the crystal forms increases the entropy of the surroundings by more than the ordering decreases the entropy of the crystal, so total entropy still goes up.
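A back-of-envelope check of the crystal case, using standard textbook numbers for water (my numbers, assumed for illustration): when water freezes below 0 °C, the entropy the crystal loses is more than repaid by the heat dumped into the colder surroundings.

```python
# Entropy bookkeeping for water freezing at -10 C surroundings.
# Standard values: latent heat of fusion ~6010 J/mol, melting point 273.15 K.
dH_fus = 6010.0        # J/mol released to the surroundings on freezing
T_melt = 273.15        # K, temperature of the freezing water
T_surr = 263.15        # K, surroundings at -10 C

dS_system = -dH_fus / T_melt          # entropy lost by the water as it orders
dS_surroundings = dH_fus / T_surr     # entropy gained by the colder surroundings
dS_total = dS_system + dS_surroundings

# system loses ~22.0 J/(mol K), surroundings gain ~22.84, net ~ +0.84 > 0
print(round(dS_system, 2), round(dS_surroundings, 2), round(dS_total, 2))
```

Note that if the surroundings were above 0 °C the total would come out negative, which is just the 2nd-Law statement that ice doesn't spontaneously form there.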

Just came across this, which seems very relevant. He's discussing KL-divergence, Free Energy, Fisher Information, Information Geometry, etc. in the context of biology:

I think the conclusion is overstated.
Perhaps "and so cannot comprehensively describe living systems" is a little less categorical and more nearly correct
I meant something more like "have little relevance for a complexity measure of living things". It's clear that organisms use/contain many fractal-like structures, so there is definitely relevance in ontogeny / developmental biology. But e.g. the final number of branches in a vascular system is mostly a function of how many times the "branching rule" got applied, which is not the same thing at all as how complex the rule itself is. It's the latter complexity that we're after. (There is some complexity in the "counter", but it's probably logarithmic in the number of iterations, so it's small and for some purposes we can ignore it.)
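A toy L-system makes this concrete (purely illustrative; the symbols and rule are made up): the branching rule is one short string, and the final number of branch tips is set entirely by how many times it is applied, not by any complexity in the rule itself.

```python
# One-line branching rule: every tip "F" sprouts two side branches.
# Applying it n times gives 3**n tips, while the "program" stays tiny --
# only the loop counter grows, and only like log(n).
def grow(n, axiom="F", rule="F[+F][-F]"):
    s = axiom
    for _ in range(n):                 # the n-step "counter"
        s = s.replace("F", rule)       # apply the single branching rule
    return s

tree = grow(8)
print(tree.count("F"))   # 6561 = 3**8 branch tips from a 9-character rule
```

The string (the "organism") gets huge, but its Kolmogorov complexity is bounded by the rule plus the counter, which is the distinction being drawn here.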

I ask the OP to play a few dozen games of John Conway's cellular automaton, the Game of Life.
Now consider how difficult it is to describe the complexity. How does all of the schmutz in these evolutions arise from such a simple system? It seems very unlikely, yet it happens over and over again in many different ways.
I am nowhere near clever enough to even understand what is not understood. I don't even know what questions to ask... but I think entropy and energy are not sufficient.
If we are to consider Conway's argument, it is that the Game of Life can produce so much complexity and variety precisely because it is so simple.
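To make "so much from so simple" concrete, here is a minimal Game of Life step in a sketch of my own (not from the thread): two rules and a handful of lines, yet gliders, oscillators, and the rest of the schmutz all follow from iterating it.

```python
import numpy as np

def step(grid):
    # count each cell's 8 neighbours (with wrap-around edges)
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # a live cell with 2-3 neighbours survives; a dead cell with exactly 3 is born
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# a glider: five cells that crawl diagonally across the grid forever
glider = np.zeros((8, 8), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    glider[r, c] = 1

start = glider.copy()
for _ in range(4):                 # a glider repeats every 4 steps,
    glider = step(glider)          # shifted one cell down and right

print(np.array_equal(glider, np.roll(start, (1, 1), (0, 1))))  # True
```

The entire "physics" of the world is the two-line rule in `step`; everything else that happens is downstream of iteration and initial conditions.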

One has to be cautious using things like Life or the Mandelbrot set as models. Their Kolmogorov complexity never grows; it is always no larger than that of the initial conditions/equations. The idea that simple iterated rules can generate large apparent complexity is worth noting, but those models have neither energy flow nor a need to respond to changes in the environment, and so have little relevance to living things.
It does not seem true that the Kolmogorov complexity doesn't change in the Game of Life; it seems easy to come up with counterexamples. And in some sense it does respond to changes in its environment, doesn't it? Can you explain these arguments further?

Also, Kolmogorov complexity is only part of the picture (the length of the shortest possible program that produces the object); another part is something like logical depth (the minimum number of steps needed to produce the object). Kolmogorov complexity is akin to the number of unique pieces needed to build the object, while logical depth is akin to how complicated it is to assemble them.

I think that in studying the complexity of life, something like logical depth is important. But that also depends on the system that computes it.
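A crude illustration of the gap between the two notions, using zlib output length as a (weak, assumed) stand-in for Kolmogorov complexity: a string generated by a tiny program can look just as "complex" to a generic compressor as a truly random one, because recognizing its structure would require re-running the deep computation that produced it.

```python
import random
import zlib

# zlib length as a rough compressibility proxy (it is NOT Kolmogorov
# complexity -- that is exactly the point of this demo).
def compressed_len(s):
    return len(zlib.compress(s.encode()))

repetitive = "01" * 5000                       # obvious structure: compresses well

random.seed(0)
rnd = "".join(random.choice("01") for _ in range(10000))   # no structure at all

x, bits = 0.3, []                              # tiny program: the logistic map
for _ in range(10000):
    x = 3.9 * x * (1.0 - x)
    bits.append("1" if x > 0.5 else "0")
chaotic = "".join(bits)                        # short program, "deep" output

print(compressed_len(repetitive), compressed_len(rnd), compressed_len(chaotic))
```

The chaotic string has a tiny generating program (low Kolmogorov complexity) yet compresses about as poorly as the random one, which is the flavor of distinction logical depth is meant to capture.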

Chaos is the law of nature. The real question is how there is any order in nature at all. How do the most complicated systems in our universe have laws that govern them? Where do those laws come from? It's the watchmaker argument. The fact that life on Earth exists at all is astronomically unlikely.

Dale
Mentor
Ok, that is enough. 34 posts in and we still are vacillating on the specific meaning of “complexity”. And now this thread is attracting nonsense. This thread is closed.