Emergence of Complexity and Life

In summary: I can certainly see how, given the existence of some complex system (such as life), it can then expend energy in order to decrease its own internal entropy. But I had the intuition that there could be some thermodynamic explanation as to how it got into that state in the first place, i.e., that the system has to/is likely to self-organise into something complex given the thermodynamic/energetic conditions it is under. The closest example I can think of is Karl Friston's Free Energy Principle: https://en.wikipedia.org/wiki/Free_energy_principle
  • #1
madness
Life, on first glance, appears to violate the second law of thermodynamics. This is because we see an apparent increase in complexity over time, i.e. a decrease of entropy. The resolution of this apparent violation is supposed to be that the sun provides enough energy and expends enough entropy to offset the decrease in entropy here on Earth. But do we have a theoretical explanation of why complexity is increasing on Earth at all? Can we state the conditions under which a system will be driven towards increasing complexity, potentially leading to the emergence of life? The fact that shining heat and light onto a rotating mass of solid and gas leads to this steady increase in complexity still feels quite surprising to me.
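To make the entropy bookkeeping concrete, here is a rough back-of-the-envelope version of the standard argument (the figures are commonly quoted round numbers, so treat this as illustrative only). Earth absorbs roughly ##P \approx 1.2\times 10^{17}## W of sunlight emitted at ##T_{\rm Sun} \approx 5800## K and re-radiates the same power at an effective temperature ##T_{\rm Earth} \approx 255## K, so the surroundings gain entropy at a rate of roughly
$$\dot S_{\rm net} \approx P\left(\frac{1}{T_{\rm Earth}} - \frac{1}{T_{\rm Sun}}\right) \approx 1.2\times 10^{17}\,{\rm W}\left(\frac{1}{255\,{\rm K}} - \frac{1}{5800\,{\rm K}}\right) \approx 4\times 10^{14}\,{\rm W/K} > 0.$$
Any local entropy decrease in the biosphere only has to be small compared with this export term for the total to keep increasing; the question here is why that allowance gets used to build structure at all.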
 
  • #2
Is complexity a parameter in any relevant scientific field? It would be that field that would provide the "theoretical explanation of why complexity is increasing".
 
  • #3
Dale said:
Is complexity a parameter in any relevant scientific field? It would be that field that would provide the "theoretical explanation of why complexity is increasing".

Complexity is a parameter in many fields but the meaning varies. It was Schrodinger who first (to my knowledge) proposed a link between complexity, entropy, and life in his book "What is Life?". Unfortunately complexity as a technical term can take on different meanings depending on the context/field. I believe Schrodinger used complexity and negentropy interchangeably, but there is also Kolmogorov complexity, integrated information, complex systems theory, and more. To make life easy, we could limit the discussion to negentropy, although my feeling is that you could potentially have systems with high negentropy but low "complexity" (by some other definition that may be more appropriate to life).
 
  • #4
madness said:
Complexity is a parameter in many fields but the meaning varies
So which is the relevant one for your question? Which one describes the complexity of biological systems?
 
  • #5
Dale said:
So which is the relevant one for your question? Which one describes the complexity of biological systems?
Read the rest of my post :rolleyes:
 
  • #6
madness said:
Read the rest of my post :rolleyes:
I read it. You mentioned several. Which one applies to biological systems?
 
  • #7
Dale said:
I read it. You mentioned several. Which one applies to biological systems?

I thought I was clear. The "best" definition isn't clear but we can go with negentropy following Schrodinger as a working definition.
 
  • #8
madness said:
I thought I was clear. The "best" definition isn't clear but we can go with negentropy following Schrodinger as a working definition.
Thanks for the clarification
 
  • #9
madness said:
Life, on first glance, appears to violate the second law of thermodynamics.
That is a misinterpretation of the 2nd Law. Life did not evolve in an isolated system. Earth receives energy from an external source, the Sun. The phrase "isolated system" is critical to understanding the 2nd Law.
 
  • #10
madness said:
The resolution of this apparent violation is supposed to be that the sun provides enough energy and expends enough entropy to offset the decrease in entropy here on Earth.

That is not correct- living systems use metabolic processes to *decrease* their own entropy at the expense of *increasing* the external entropy.
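In the usual open-system bookkeeping (a sketch, not tied to any particular organism), the system's entropy change splits into a non-negative internal production term and an exchange term with the surroundings,
$$\frac{dS_{\rm sys}}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt}, \qquad \frac{d_i S}{dt} \ge 0,$$
so ##dS_{\rm sys}/dt < 0## is perfectly consistent with the second law whenever the system exports enough entropy, i.e. whenever ##d_e S/dt < -\,d_i S/dt##.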
 
  • #11
anorlunda said:
That is a misinterpretation of the 2nd Law. Life did not evolve in an isolated system. Earth receives energy from an external source, the Sun. The phrase "isolated system" is critical to understanding the 2nd Law.

Yes I believe I made that clear in my OP.

Andy Resnick said:
That is not correct- living systems use metabolic processes to *decrease* their own entropy at the expense of *increasing* the external entropy.

Ok. So what are the thermodynamic conditions under which a system will tend to decrease its own entropy at the expense of increasing external entropy?
 
  • #12
madness said:
Yes I believe I made that clear in my OP.
Ok. So what are the thermodynamic conditions under which a system will tend to decrease its own entropy at the expense of increasing external entropy?

There are many: try starting with 'chemiosmotic hypothesis '
 
  • #13
Andy Resnick said:
There are many: try starting with 'chemiosmotic hypothesis '

I can certainly see how, given the existence of some complex system (such as life), it can then expend energy in order to decrease its own internal entropy. But I had the intuition that there could be some thermodynamic explanation as to how it got into that state in the first place, i.e., that the system has to/is likely to self-organise into something complex given the thermodynamic/energetic conditions it is under.

The closest example I can think of is Karl Friston's Free Energy Principle https://en.wikipedia.org/wiki/Free_energy_principle.
 
Last edited:
  • #15
madness said:
Can we state the conditions under which a system will be driven towards increasing complexity, potentially leading to the emergence of life?
Although it is not a formal part of thermodynamics, I read somewhere a long time ago (I think perhaps more than sixty years ago, when I was an undergraduate) a discussion of what became informally called the 4th law of thermodynamics. (This is not to be confused with what is now apparently called the zeroth law.) The discussion was about an experiment which (as I remember it) took place in Italy. A collection of various chemicals (it may have been all organics, but my memory about this is vague) was kept in an environment of active stirring and warming. After a while the mixture contained quite a few examples of more complex molecules which were previously absent. The mixture had reduced its entropy as free energy was added externally.

This fourth law (as I remember it) was phrased something like the following.
Given the possibility of complexity increasing within a system, the increase in complexity will always take place given adequate, but not too much, free energy.
 
Last edited:
  • #16
Medical Student: Dr. "Fronkenschteen," isn't it true that Darwin preserved a piece of vermicelli in a glass case until, by some extraordinary means, it actually began to move with voluntary motion?
Dr. Frankenstein: Are you speaking of the worm or the spaghetti?
 
  • #17
Jeremy England at MIT has probably done some of the most important recent work on this. Here’s one of his articles that kind of started the gold rush in this field:
https://aip.scitation.org/doi/full/10.1063/1.4818538

Edit: apparently he’s not at MIT anymore. He’s at Glaxo.
 
  • #18
For the kind of complexity that is relevant to life and its origin, it is clear that Shannon information (negative Shannon entropy) is the wrong measure. The state of minimum entropy, a perfect crystal near absolute zero, is dead because it has no variety. The state of maximum entropy, a hot gas, is dead because it has no structure. Life exists on the boundary between too much order and too much chaos.

Fisher information is more relevant, as it is dimensioned and hence physically-coupled, and tends to arise any time there is a probability distribution of some physical quantity. In Quantum Mechanics, for example, the kinetic energy of a stationary solution to the Schrödinger equation is linearly proportional to the Fisher information about its position, and something very close to the Heisenberg uncertainty relation can be trivially derived from the Cramer-Rao inequality. The FI is high whenever there are sharp changes in probability density, so it is definitely applicable to things like pattern generation (changes over space like leopard spots and zebra stripes, but also changes over time like turning genes on and off) and concentration gradients. (See B. Roy Frieden, Science from Fisher Information for a sense of its universality.)
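As a sketch of that QM connection (under simplifying assumptions: a real, normalized stationary wavefunction ##\psi(x)## with ##\langle p\rangle = 0##, and ##p(x) = \psi(x)^2## treated as a location family), the Fisher information about position is
$$I = \int \frac{[p'(x)]^2}{p(x)}\,dx = 4\int [\psi'(x)]^2\,dx = \frac{8m}{\hbar^2}\,\langle T\rangle,$$
and the Cramér-Rao bound ##\mathrm{Var}(x) \ge 1/I## then gives ##\langle T\rangle\,\mathrm{Var}(x) \ge \hbar^2/8m##, which with ##\langle T\rangle = \langle p^2\rangle/2m## is just ##\Delta x\,\Delta p \ge \hbar/2##.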

Jeremy England's equations are obviously correct; the question is whether they are saying something trivial or profound. I look forward to experimental results.

This is a hard question, or somebody would have solved it already. We have to look at all the options. Kolmogorov complexity clearly has value after things get digital. Even the "Specified Complexity" of Bill Dembski's intelligent design theory had some interesting ideas (although his derivation is hopelessly flawed; see my analysis here). Philosophical concepts of "meaning" (the topic that Shannon explicitly avoided) may have some relevance; one approach is to say that a message has meaning if it changes the state of the receiver.
 
  • #19
H_A_Landman said:
For the kind of complexity that is relevant to life and its origin, it is clear that Shannon information (negative Shannon entropy) is the wrong measure. The state of minimum entropy, a perfect crystal near absolute zero, is dead because it has no variety. The state of maximum entropy, a hot gas, is dead because it has no structure. Life exists on the boundary between too much order and too much chaos.

Fisher information is more relevant, as it is dimensioned and hence physically-coupled, and tends to arise any time there is a probability distribution of some physical quantity. In Quantum Mechanics, for example, the kinetic energy of a stationary solution to the Schrödinger equation is linearly proportional to the Fisher information about its position, and something very close to the Heisenberg uncertainty relation can be trivially derived from the Cramer-Rao inequality. The FI is high whenever there are sharp changes in probability density, so it is definitely applicable to things like pattern generation (changes over space like leopard spots and zebra stripes, but also changes over time like turning genes on and off) and concentration gradients. (See B. Roy Frieden, Science from Fisher Information for a sense of its universality.)

Jeremy England's equations are obviously correct; the question is whether they are saying something trivial or profound. I look forward to experimental results.

Interesting. I work in computational neuroscience where we regularly use Fisher Information. I'm not convinced that it's the correct measure in this case. You are most likely correct that Shannon entropy isn't either - Schrodinger wrote that he really meant "free energy" when he used the term "negentropy" (https://en.wikipedia.org/wiki/Entropy_and_life#Negative_entropy). Karl Friston came up with the Free Energy Principle (https://en.wikipedia.org/wiki/Free_energy_principle), which uses a slightly different type of free energy than Schrodinger's (it's more to do with statistics and information theory). His theory is a bit like the Good Regulator Theorem from cybernetics, if you have heard of that (https://en.wikipedia.org/wiki/Good_regulator). Fisher information is an unusual measure in that it is local: it depends on the value of the parameter being estimated rather than on the full probability distribution. In any case, Fisher information is the leading-order (quadratic) approximation to the Kullback-Leibler divergence, which in turn is what the free energy of Friston and others is based on.
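For reference, the expansion being invoked here (sketched for a smooth one-parameter family ##p_\theta##) is
$$D_{\rm KL}\big(p_\theta \,\big\|\, p_{\theta+\delta}\big) = \tfrac{1}{2}\, I(\theta)\,\delta^2 + O(\delta^3), \qquad I(\theta) = \mathbb{E}_{p_\theta}\!\left[\big(\partial_\theta \log p_\theta(x)\big)^2\right],$$
so the Fisher information is the curvature of the KL divergence in the parameter; the first-order term vanishes because the divergence is minimized (at zero) when the two distributions coincide.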

H_A_Landman said:
This is a hard question, or somebody would have solved it already. We have to look at all the options. Kolmogorov complexity clearly has value after things get digital. Even the "Specified Complexity" of Bill Dembski's intelligent design theory had some interesting ideas (although his derivation is hopelessly flawed; see my analysis here). Philosophical concepts of "meaning" (the topic that Shannon explicitly avoided) may have some relevance; one approach is to say that a message has meaning if it changes the state of the receiver.

Interesting. I feel like there has to be more that can be said in this area, and yet since Shannon's information theory I don't feel like there have really been any revolutionary advances (maybe Kolmogorov's complexity theory could count, but it hasn't had a huge amount of impact as it is too hard to compute in practice).
 
  • #21
Dale said:
So it seems we are back to post 4.

I've consistently said negentropy throughout, which is the same as Free Energy. Nothing has changed.
 
  • #22
madness said:
Interesting. I work in computational neuroscience where we regularly use Fisher Information. I'm not convinced that it's the correct measure in this case.

Well, I didn't claim that it was the solution to everything, only that it had some applicability.

madness said:
(maybe Kolmogorov's complexity theory could count, but it hasn't had a huge amount of impact as it is too hard to compute in practice).

Nod, but every lossless compression algorithm gives an upper bound, and we know how to compute those. There are also hybrid measures where the complexity is a weighted sum of the size of the program generating the output plus the amount of computation required; that biases things towards relatively simple decompression algorithms.
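As a concrete (and deliberately crude) illustration of the upper-bound idea, here is a minimal Python sketch using zlib's DEFLATE as the stand-in compressor; the compressed length, plus the fixed cost of the decompressor, upper-bounds the Kolmogorov complexity even though the true value is uncomputable:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Compressed length in bytes: a computable upper bound (up to the
    fixed, data-independent cost of the decompressor) on K(data)."""
    return len(zlib.compress(data, 9))

ordered = b"AB" * 5000       # highly regular 10,000-byte string
noise = os.urandom(10_000)   # essentially incompressible 10,000 bytes

print(complexity_upper_bound(ordered))  # small: the regularity is captured
print(complexity_upper_bound(noise))    # close to 10,000: nothing to exploit
```

Different compressors give different bounds, which is part of why Kolmogorov complexity is awkward to use in practice.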

madness said:
Karl Friston came up with the Free Energy Principle (https://en.wikipedia.org/wiki/Free_energy_principle), which uses a slightly different type of free energy than Schrodinger's (it's more to do with statistics and information theory). His theory is a bit like the Good Regulator Theorem from cybernetics, if you have heard of that (https://en.wikipedia.org/wiki/Good_regulator). Fisher information is an unusual measure in that it is local: it depends on the value of the parameter being estimated rather than on the full probability distribution. In any case, Fisher information is the leading-order (quadratic) approximation to the Kullback-Leibler divergence, which in turn is what the free energy of Friston and others is based on.

Thanks for the references. I had not heard of Friston before, but my wife (a cognitive neuroscientist doing fMRI) most certainly had.

Frieden says that the relation between FI and K-LD (a.k.a. "cross-entropy") is that FI "is proportional to the cross-entropy between the PDF p(x) and a reference PDF that is its shifted version p(x + Δx)." That makes intuitive sense to me, because sharp transitions would make that cross-entropy large.

Since it is possible to derive relativistic QM (incl. Dirac eqn) from FI (see Frieden chapter 4), I wonder what you would get if you derived QM from K-LD?
 
  • #23
madness said:
This is because we see an apparent increase in complexity over time, i.e. a decrease of entropy
I ask the OP to play a few dozen games of John Conway's cellular automata Game of Life.
Now consider how difficult it is to describe the complexity. How does all of the schmutz in these evolutions arise from such a simple system? Seems very unlikely but yet it happens over and over again in many different ways.
I am nowhere near clever enough to even understand what is not understood. I don't even know what questions to ask...but I think entropy and energy are not sufficient.
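For anyone who wants to try this, here is a minimal NumPy sketch of the standard B3/S23 rules on a wrap-around grid (the grid size and random seed are arbitrary choices):

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life update (born on 3, survive on 2 or 3) on a wrap-around grid."""
    # Count the eight neighbours of every cell by rolling the grid in each direction.
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    born = (grid == 0) & (neighbours == 3)
    survives = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survives).astype(np.uint8)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
for _ in range(100):   # run a while and watch blinkers, gliders, etc. appear
    grid = life_step(grid)
```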
 
  • #24
hutchphd said:
I ask the OP to play a few dozen games of John Conway's cellular automata Game of Life.
Now consider how difficult it is to describe the complexity. How does all of the schmutz in these evolutions arise from such a simple system? Seems very unlikely but yet it happens over and over again in many different ways.
I am nowhere near clever enough to even understand what is not understood. I don't even know what questions to ask...but I think entropy and energy are not sufficient.

I agree with your sentiment. However, my question was not whether we can understand the details of the complexity, but rather how to characterise the situations in which complexity would or wouldn't emerge. It does seem to depend on the particular system. For example, an ideal gas will not form a complex system regardless of the energy and entropy involved, whereas Conway's Game of Life seems well-equipped to produce complexity. Nonetheless, I can't help but feel that we are missing something important, and that there could be some deeper principles involved that we haven't grasped.
 
  • #25
One has to be cautious using things like Life or the Mandelbrot set as models. Their Kolmogorov complexity never grows; it is always no larger than that of the initial conditions/equations. The idea that simple iterated rules can generate large apparent complexity is worth noting, but those models have neither energy flow nor a need to respond to changes in the environment, and so have little relevance to living things.
 
  • #26
H_A_Landman said:
Thanks for the references. I had not heard of Friston before, but my wife (a cognitive neuroscientist doing fMRI) most certainly had.

Frieden says that the relation between FI and K-LD (a.k.a. "cross-entropy") is that FI "is proportional to the cross-entropy between the PDF p(x) and a reference PDF that is its shifted version p(x + Δx)." That makes intuitive sense to me, because sharp transitions would make that cross-entropy large.

Since it is possible to derive relativistic QM (incl. Dirac eqn) from FI (see Frieden chapter 4), I wonder what you would get if you derived QM from K-LD?

This all sounds very interesting. I hadn't heard of using FI to derive physics. KL-divergence isn't quite cross-entropy, although they are closely related (https://tdhopper.com/blog/cross-entropy-and-kl-divergence). Fisher information is the curvature of the KL-divergence (https://en.wikipedia.org/wiki/Kullback–Leibler_divergence#Fisher_information_metric). If you are interested, look up information geometry, where they use Fisher information as a metric in a differential geometry formulation of information theory.
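A quick numerical sanity check of the curvature statement (a toy sketch: the mean of a unit-variance Gaussian, for which the Fisher information is exactly ##1/\sigma^2 = 1##):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma=1.0):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def kl(p, q, dx):
    """Discrete approximation of KL(p || q) on a shared grid."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
p = gaussian_pdf(x, 0.0)

eps = 0.01
# Second difference of KL(p_0 || p_mu) in mu at mu = 0 (note KL(p || p) = 0):
# this curvature should reproduce the Fisher information about the mean.
curvature = (kl(p, gaussian_pdf(x, eps), dx) + kl(p, gaussian_pdf(x, -eps), dx)) / eps ** 2
print(curvature)  # ~1.0, matching I = 1/sigma^2 for sigma = 1
```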
 
  • #27
H_A_Landman said:
The idea that simple iterated rules can generate large apparent complexity is worth noting, but those models have neither energy flow nor a need to respond to changes in the environment, and so have little relevance to living things
I think the conclusion is overstated.
Perhaps "and so cannot comprehensively describe living systems" is a little less categorical and more nearly correct
 
  • #29
madness said:
The fact that shining heat and light onto a rotating mass of solid and gas leads to this steady increase in complexity still feels quite surprising to me.

It is not just energy from the sun that powers living organisms.
Chemical energy found in the environment can provide power independent of light from the sun.
One example is alkaline hydrothermal vents, where sea water reacts with new ocean floor rock and then rises to contact the unaltered seawater. The difference between the two solutions produces a difference in redox potential, which has been hypothesized to power pre-life organic syntheses.
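To put a very rough number on that (purely illustrative; the usable potential difference depends on the local vent chemistry), the free energy available from a redox couple is
$$\Delta G = -nF\,\Delta E, \qquad \text{e.g. a hypothetical } \Delta E \approx 0.5\ {\rm V} \text{ with } n = 2 \ \Rightarrow\ \Delta G \approx -96\ {\rm kJ/mol},$$
which is, in principle, enough to drive organic syntheses.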

There are many kinds of prokaryotes (bacteria and archaea) that derive power from minerals and a chemical redox partner (which are found in particular environments).

madness said:
Can we state the conditions under which a system will be driven towards increasing complexity, potentially leading to the emergence of life?

Many think conditions for life to arise would (in a general way) be based upon Prigogine's dissipative structures:
Prigogine is best known for his definition of dissipative structures and their role in thermodynamic systems far from equilibrium, a discovery that won him the Nobel Prize in Chemistry in 1977. In summary, Ilya Prigogine discovered that importation and dissipation of energy into chemical systems could result in the emergence of new structures (hence dissipative structures) due to internal self reorganization.[18]
If the energy difference is too large or too small, dissipative structures won't form. They form at a medium level of potential.

Discussing Life As We Know It (Life On Earth):

Dissipative structures may form in particular environments where a driver (like two complementary solutions that could form a productive redox pair) exists.
Environments like this could be considered nursery environments, where opportunities exist for easy harvesting of environmental energy by a simple supra-molecular device.
Thus, the proper dissipative structure could produce organic molecules and become a center for subsequently generating more complexity.

As more (organic) molecules are produced in a local area, they will increase the different possibilities for novel interactions between different molecules (generating more and different organic molecules).
It has the potential to become a vicious cycle of generating chemical diversity within a structurally organized entity (derived from a dissipative structure).

Something I have found limiting in the traditional information approaches (at least using Shannon Information) to the issue of biological complexity is the complete lack of any link to the meaning of any particular chemical/super molecular structures that might be generated.

In the real world of biology (largely composed of interacting molecules in complex structures), different molecular components each have a function (or more than one) in keeping their higher-scale enveloping entity reproducing, as a well adapted reproducing entity should.
What the component does, with respect to the enveloping entity that gets selected (and has to reproduce in some way), is where its meaning lies.
New meanings for components can be found among novel combinations of the newly created molecules within the entity, or in features of the environment generated as byproducts of the proto-living entities.
Such meaning would also depend on the particular features of its environment (from which energy is harvested in some way), and on the functional details of the enveloping and reproducing entity of which it is a component.

These ideas are largely derived from those in Hidalgo's book Why Information Grows. It's about economics, but it can also apply to biology.
Here is a thread I posted on that book.
TeethWhitener said:
Jeremy England at MIT has probably done some of the most important recent work on this. Here’s one of his articles that kind of started the gold rush in this field:
https://aip.scitation.org/doi/full/10.1063/1.4818538

Jeremy England was recently on Sean Carroll's podcast.
 
  • #30
I do not see any problem with increasing complexity and the 2nd Law. Every day I see the results of metabolic processes as we respire and excrete. I see the results of human activity as we increase our social and economic complexities with the detritus of that activity.

The formation of crystals appears to violate the 2nd Law but it does not.
 
  • #31
Just came across this, which seems very relevant. He's discussing KL-divergence, Free Energy, Fisher Information, Information Geometry, etc. in the context of biology:

 
Last edited:
  • #32
hutchphd said:
I think the conclusion is overstated.
Perhaps "and so cannot comprehensively describe living systems" is a little less categorical and more nearly correct

I meant something more like "have little relevance for a complexity measure of living things". It's clear that organisms use/contain many fractal-like structures, so there is definitely relevance in ontogeny / developmental biology. But e.g. the final number of branches in a vascular system is mostly a function of how many times the "branching rule" got applied, which is not the same thing at all as how complex the rule itself is. It's the latter complexity that we're after. (There is some complexity in the "counter", but it's probably logarithmic in the number of iterations, so it's small and for some purposes we can ignore it.)
 
  • #33
hutchphd said:
I ask the OP to play a few dozen games of John Conway's cellular automata Game of Life.
Now consider how difficult it is to describe the complexity. How does all of the schmutz in these evolutions arise from such a simple system? Seems very unlikely but yet it happens over and over again in many different ways.
I am nowhere near clever enough to even understand what is not understood. I don't even know what questions to ask...but I think entropy and energy are not sufficient.
If we are to consider Conway's argument, it is that the reason the Game of Life can produce so much complexity and variety is that it is so simple.
 
  • #34
H_A_Landman said:
One has to be cautious using things like Life or the Mandelbrot set as models. Their Kolmogorov complexity never grows; it is always no larger than that of the initial conditions/equations. The idea that simple iterated rules can generate large apparent complexity is worth noting, but those models have neither energy flow nor a need to respond to changes in the environment, and so have little relevance to living things.
That does not seem true, that the Kolmogorov complexity doesn't change in the Game of Life. It seems easy to come up with counterexamples. And in some sense it does seem to respond to changes in the environment, doesn't it? Can you explain these arguments further?

Also, Kolmogorov complexity is only part of the picture (what is the shortest possible program to produce the object); another part is something like logical depth (what is the minimum possible number of steps needed to produce the object). Kolmogorov complexity is akin to the number of unique pieces needed to produce the object, while logical depth is akin to how complicated it is to assemble.

I think that in studying the complexity of life, something like logical depth is important. But that also depends on the system that computes it.
 
Last edited:
  • #35
Chaos is the law of nature. The real question is how there is any order in nature at all. How do the most complicated systems in our universe have laws that govern them? Where do those laws come from? It's the watchmaker theory. The fact that life on Earth exists at all is astronomical.
 

1. What is emergence of complexity and life?

The emergence of complexity and life refers to the process by which simple and basic components come together to form complex and organized systems that exhibit characteristics of living organisms. This process is believed to have occurred over billions of years on Earth, leading to the development of life as we know it.

2. How did complexity and life emerge on Earth?

The exact mechanisms of how complexity and life emerged on Earth are still being studied and debated by scientists. However, it is believed that the early Earth was a hot and hostile environment, but over time, simple molecules such as amino acids and nucleotides formed and eventually combined to create more complex molecules like proteins and DNA. These molecules then evolved and formed the building blocks of living organisms.

3. What are some examples of emergent complexity in living organisms?

Some examples of emergent complexity in living organisms include the development of multicellularity, the emergence of consciousness and self-awareness, and the evolution of complex social behaviors and systems. These emergent properties are not present in individual cells or components, but arise from the interactions and organization of these components.

4. Can emergence of complexity and life occur on other planets?

It is possible that the emergence of complexity and life could occur on other planets, but it is difficult to say for certain without further exploration and research. The conditions and processes that led to the emergence of life on Earth may be unique, but it is also possible that similar processes could occur on other planets with suitable conditions.

5. How does the study of emergence of complexity and life benefit us?

Studying the emergence of complexity and life can help us better understand the origins of life and the processes that have shaped the development of living organisms on Earth. This knowledge can also have practical applications, such as in the development of new technologies and medicines, and in addressing issues related to sustainability and the environment.
