Computer simulation of an organism

Discussion Overview

The discussion revolves around the theoretical possibility of creating a computer simulation of an organism based on digitized DNA. It explores the complexities involved in simulating biological systems, including protein folding and the limitations of reductionist approaches in modeling whole organisms.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants suggest that while it is theoretically possible to simulate an organism atom by atom, practical implementation is currently unfeasible due to the complexities involved.
  • There is a consensus that DNA alone does not provide a complete atomic description necessary for simulating an organism, as it requires existing cellular machinery and environmental factors.
  • Participants note that current modeling efforts are often limited to cellular levels, with examples such as C. elegans being highlighted for their simplicity in simulation.
  • Some participants mention that while projects like Folding at Home have made progress in protein folding, the problem of accurately predicting protein configurations from amino acid sequences remains unsolved.
  • There is a discussion about the potential pitfalls of reductionism in modeling, with some arguing that being "too reductionist" can lead to issues like overfitting, while others contend that reductionism is essential for understanding complex systems.
  • Emergent behavior in complex systems is raised as a challenge to reductionist approaches, complicating the ability to model interactions at a granular level.

Areas of Agreement / Disagreement

Participants express differing views on the feasibility of simulating organisms from DNA, with some arguing against the reductionist approach while others defend its necessity. The discussion remains unresolved with multiple competing perspectives on the limitations and possibilities of biological modeling.

Contextual Notes

Limitations include the dependence on existing knowledge about cellular machinery and environmental influences, as well as unresolved issues in protein structure prediction and the challenges posed by emergent behaviors in complex systems.

Geo212
Is it possible, at least theoretically, to take digitised DNA and produce a computer simulation of the organism that it came from?
 
In theory, you can simulate the whole organism, atom by atom. In practice, this is completely unfeasible.

Note that you would need more than just the DNA to determine what the organism looks like.
 
Ryan_m_b said:
For the moment we can't even model the folding of a single protein
Folding at home does exactly that. It takes time and some manual tuning, but it is possible.
 
mfb said:
Folding at home does exactly that. It takes time and some manual tuning, but it is possible.

True, what I meant was we can't reliably plug in the amino-acid sequence of any odd protein and have an accurate final structure come out. Folding at home and similar projects do have some success, but as far as I'm aware the problem of predicting final protein configuration is still considered unsolved. Perhaps I'm out of date with regard to progress in this field.
 
Geo212 said:
Is it possible, at least theoretically, to take digitised DNA and produce a computer simulation of the organism that it came from?

Trying to model *whole organisms from a molecular point of view is, as mfb implied, absurdly reductionist. Current attempts at simulation are generally at the cellular level. C. elegans has received a lot of attention in this regard. You can find numerous C. elegans simulations on Google that will give you an idea of the (very simple) level at which whole organisms are modeled. The more complex the organism, the higher the levels of abstraction you need to keep generalizations appropriate. When you get too reductionist, you get problems like "overfitting".

*edit: for clarification
 
Research groups have made efforts to computationally model and simulate the metabolism of a small bacterium, and these models have had some success in predicting the effects of mutations on the phenotype of the bacteria. We discussed such studies in this Physics Forums thread from a few years ago.

Note that the paper that I reported on in that thread is not doing any de novo modeling. They start with knowledge about the enzymes encoded by the genome and are guided by empirical data on the rates of those enzymes. As many have noted, protein structure prediction is not yet to the point where we can predict parameters such as catalytic rates starting only from a DNA sequence.
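The constraint-based approach described above (start from known enzymes, bound their rates with empirical data, then predict phenotype) can be sketched as a toy flux-balance calculation. To be clear, this is not the paper's method; the network, reaction names, and flux bounds below are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix for a toy metabolic network.
# Rows: metabolites A, B.  Columns: reactions
#   v0: nutrient uptake -> A   (capacity 10)
#   v1: A -> B via enzyme 1    (capacity 10)
#   v2: A -> B via enzyme 2    (capacity 3, a slower backup pathway)
#   v3: B -> biomass output    (unbounded)
S = np.array([
    [1, -1, -1,  0],   # metabolite A: produced by v0, consumed by v1, v2
    [0,  1,  1, -1],   # metabolite B: produced by v1, v2, consumed by v3
])

def max_growth(knockout=None):
    """Maximize the biomass flux v3 subject to steady state S @ v = 0,
    optionally forcing one reaction's flux to zero (a 'knockout')."""
    bounds = [(0, 10), (0, 10), (0, 3), (0, None)]
    if knockout is not None:
        bounds[knockout] = (0, 0)
    # linprog minimizes, so maximize v3 by minimizing -v3.
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0],
                  bounds=bounds, method="highs")
    return -res.fun
```

In this toy network the wild type reaches the uptake limit (a growth flux of 10), while knocking out enzyme 1 (`max_growth(knockout=1)`) reroutes flux through the backup pathway and growth falls to its capacity of 3. Real models like the one in the paper work the same way in spirit, but with hundreds of reactions whose bounds come from measured enzyme kinetics rather than made-up numbers.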
 
Geo212 said:
Is it possible, at least theoretically, to take digitised DNA and produce a computer simulation of the organism that it came from?

No.

The reason is that DNA isn't an atomic description of "what goes here and what goes there". It is a recipe for controlling and maintaining an already functioning organism, and it relies on preexisting cellular machinery (from the ovum) and an environment that directs development from the cellular level up.

If you can already model the rest of the organism from a subcellular level, sure. Then the DNA (or at least its genome) adds the missing functions (as described above).

Pythagorean said:
When you get too reductionist, you get problems like "over fitting".

I guess I don't understand this. As I understand it you simply can't be "too reductionist".

Rather, due to emergent behavior it becomes practically impossible to pick apart some systems. For an example:

"This paper illustrates the use of the nonparametric Wald-Wolfowitz test to detect stationarity and ergodicity in agent-based models. A nonparametric test is needed due to the practical impossibility to understand how the random component influences the emergent properties of the model in many agent-based models."

But note that the test is "reductionist", i.e. informed of the system state with respect to the studied behavior:

"Knowledge of the basic properties of the artificial time series is essential to reach a correct interpretation of the information which can be extracted from the model. To acquire such knowledge it is important to perform statistical testing of the properties of the artificial data (Leombruni and Richiardi 2005; Richiardi et al. 2006). Supposing that an agent-based model has a statistical equilibrium, defined as a state where some relevant statistics of the system are stationary (Richiardi et al. 2006) the stationarity test can help in detecting it."

[ http://jasss.soc.surrey.ac.uk/15/2/7.html ]

Other problems with systems are stuff like degeneracy. But that is part of the physics studied.
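The Wald-Wolfowitz runs test quoted above can be sketched in a few lines. This is a minimal illustration on a generic numeric series (dichotomized at the median, with the usual normal approximation), not the paper's implementation:

```python
import math

def runs_test(series):
    """Wald-Wolfowitz runs test z-statistic: compare each value to the
    median, count runs of above/below, and standardize against the
    expected run count for a random sequence."""
    med = sorted(series)[len(series) // 2]        # simple upper median
    signs = [x > med for x in series if x != med]  # drop ties with the median
    n1 = sum(signs)            # count above the median
    n2 = len(signs) - n1       # count below the median
    n = n1 + n2
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)
```

A trending (non-stationary) series produces very few runs and a large negative z, e.g. `runs_test(list(range(20)))` is about -4, so stationarity would be rejected; a well-mixed series stays near zero. The point of the quoted passage is exactly this: the test only looks at a summary statistic of the output, not at how the agents' random components produced it.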
 
Torbjorn_L said:
I guess I don't understand this. As I understand it you simply can't be "too reductionist". [...]

When it comes to modeling, you can certainly be too reductionist (and, conversely, too general). It's a matter of practicality, not a slight on reductionism:

http://en.wikipedia.org/wiki/Overfitting

The basic idea is that if you're studying something complex (like fluids) you use abstractions like the Reynolds number, pressure, temperature, and other group descriptions, rather than trying to model each particle in the ensemble. It's not that the particle view isn't valid, but that it's not practical in a computer simulation.
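The overfitting point can be made concrete with a standard toy experiment (my own illustration, not from the linked article): fit polynomials of low and high degree to a handful of noisy samples of a sine wave, then compare the error on the training points against the error relative to the true underlying curve.

```python
import numpy as np

def fit_errors(degree, seed=0):
    """Fit a polynomial of the given degree to 10 noisy samples of a
    sine wave; return (MSE on the training points,
                       MSE against the noise-free curve)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, 10)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy data

    coeffs = np.polyfit(x, y, degree)      # least-squares polynomial fit
    x_dense = np.linspace(0, 1, 200)
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    true_mse = np.mean((np.polyval(coeffs, x_dense)
                        - np.sin(2 * np.pi * x_dense)) ** 2)
    return train_mse, true_mse
```

A degree-9 polynomial threads through all 10 points, driving the training error to essentially zero while the error against the true curve blows up between the samples: the model has fit the noise, which is the "too reductionist" failure mode in miniature. A degree-3 fit has a higher training error but tracks the underlying sine far better.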
 
Pythagorean said:
When it comes to modeling, you can certainly be too reductionist (and, conversely, too general). It's a matter of practicality, not a slight on reductionism:

http://en.wikipedia.org/wiki/Overfitting

My understanding is that overfitting has nothing to do with "reductionism" in the practical sense (a philosophical term for treating a system as composed of its parts, and using that to advantage); it is a problem of statistical modeling.

I agree with the rest of course, as it was much of what my comment tried to describe (in a longer format).
 
Torbjorn_L said:
I understand it as that overfitting has nothing to do with practical "reductionism" (a philosophic term) as a system composed of its parts and how to use that to advantage (an actual usage), but is a problem of statistic modeling.

Yes, perhaps a more relevant concept is the granularity of the model.
 
