The title is the name of a Science book review of the following book:

From Strange Simplicity to Complex Familiarity, by Manfred Eigen, published back in April.

The review was published in the current issue of Science, but it's behind a paywall. It's interesting, though; if you're at a university, you might be able to access it through the library.

I think it says not to optimize at all, really. And it doesn't just say what not to do; it suggests that a game-theoretic perspective is a more appropriate conceptual toolkit. This is interesting to me in particular because I use genetic algorithms to estimate unknown physiological parameters, and I've been struggling with the contradiction of searching for a single optimum in a diverse landscape.
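
To show what I mean by that contradiction, here's a toy sketch (an entirely hypothetical two-parameter model, not my actual physiological one): the "data" only constrain the product a*b, so the fitness landscape has a whole curve of equally good optima, and a plain genetic algorithm run from different seeds lands on very different parameter sets that fit equally well:

```python
import random

# Toy "model": the output depends only on the product a*b, so the optimum
# is a whole curve a*b = 2 rather than a single point -- a minimal,
# hypothetical stand-in for degeneracy in physiological models.
def error(params, target=2.0):
    a, b = params
    return (a * b - target) ** 2

def run_ga(seed, pop_size=50, gens=100):
    """A deliberately plain genetic algorithm: truncation selection,
    crossover of the two parameters, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error)
        survivors = pop[: pop_size // 2]   # keep the best half (elitism)
        children = []
        while len(survivors) + len(children) < pop_size:
            a1, _ = rng.choice(survivors)   # take a from one parent,
            _, b2 = rng.choice(survivors)   # b from another
            children.append((a1 + rng.gauss(0, 0.05), b2 + rng.gauss(0, 0.05)))
        pop = survivors + children
    return min(pop, key=error)

# Different seeds converge to different, equally good parameter sets.
for a, b in (run_ga(s) for s in range(5)):
    print(f"a={a:.2f}  b={b:.2f}  a*b={a*b:.3f}  err={error((a, b)):.1e}")
```

Every run ends up near the hyperbola a*b = 2, but at a different point on it, so reporting a single "optimal" parameter set hides the degeneracy.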

More appropriate than what? Isn't "evolutionary game theory" just a new name for the same old thing - dynamics? A particular form of evolutionary dynamics - e.g. with frequency-dependent fitness - is conceptually analogous to game theory. But that is basic and not new.

More generally, what is the "environment"? In thermodynamics we know the division between system and environment is arbitrary. So in evolution, the "environment" of an organism includes other species. If the environment changes - including the frequencies of other species - the fitness of the organism can change.

What's new, then? If you look at what's taught in undergraduate biology (http://www.life.umd.edu/classroom/zool360/L18-ESS/ess.html), the standard treatment assumes infinite populations and interactions between all pairs (dynamics on a fully connected graph). The recent work, including that by Nowak, who co-authored the review, extends this to finite N and to graphs that are not fully connected. There are effects that are not obvious even if one knows the result for large N and full connectivity.
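
The finite-N point can be made concrete with a toy simulation. Here's a minimal sketch (assuming the standard birth-death Moran update in a well-mixed population; N and r are arbitrary numbers) comparing a simulated fixation probability of a single mutant against the known closed-form result - the stochastic, finite-N answer differs from the deterministic infinite-population intuition:

```python
import random

def moran_fixation(N, r, trials=20000, seed=1):
    """Estimate the fixation probability of a single mutant with relative
    fitness r in a well-mixed (complete-graph) Moran process of size N."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # current number of mutants
        while 0 < i < N:
            # birth: pick the reproducing type with probability proportional to fitness
            mutant_reproduces = rng.random() < i * r / (i * r + (N - i))
            # death: pick the dying individual uniformly at random
            mutant_dies = rng.random() < i / N
            if mutant_reproduces and not mutant_dies:
                i += 1
            elif not mutant_reproduces and mutant_dies:
                i -= 1
        if i == N:
            fixed += 1
    return fixed / trials

def fixation_exact(N, r):
    # closed form for the well-mixed Moran process: (1 - 1/r) / (1 - r^-N)
    return (1 - 1 / r) / (1 - r ** -N)

print(moran_fixation(10, 1.5), fixation_exact(10, 1.5))
```

Even a beneficial mutant (r = 1.5) fixes only about a third of the time at N = 10 - pure drift effects that vanish in the infinite-N replicator picture. The graph-structured versions replace "death: uniform over the population" with "death: uniform over the reproducer's neighbors".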

Optimization algorithms - finding a single minimum, regardless of whether your system is dynamic or not. It's not just about not blindly optimizing; it's about the fundamental conceptual conflict of looking for a single optimum in a system with degeneracy and variability. Yet many (in my department) still use optimization anyway.

I don't know any game theory beyond the prisoner's dilemma, but that's not apparent to me. I think Lotka-Volterra might be an example of something that is dynamics but not game theory, and in the prisoner's dilemma there is no state, so I don't conceptualize it as dynamical. So maybe you're using a loose definition of "dynamical", but I don't see anything close to a 1-to-1 correspondence there, just some overlap.

Just making sure that basic things are not being propagated as novel. I edited my post #6 to indicate what's new about the recent work - the study of dynamics on finite, structured graphs.

There is a state in all of this; it's truly (stochastic) dynamics. Most of the mathematics is Markov chains.

I don't have a good enough sense of the literature to judge its global novelty. I know that optimization approaches are commonplace in physiological modelling, I go a different route, and this review articulated my feelings about estimating parameters to make a model match observations.

I've heard that when a spine makes transcription requests of the soma (e.g. via TIPS after microtubule invasion, or other BDNF pathways), the contents of the package (presumably things like PSD-95 carried by motor proteins) can be hijacked by spines closer to the soma as they pass by. I guess the paper you mention contributes to the same idea of neural Darwinism with respect to synapses. I can imagine a high-level experimentalist (a neuropsychologist, or someone looking at systems development) using this to explain system behavior in networks.

Perhaps it's novel with respect to your application. In those papers they are simply exploring the space of dynamical system models. But you want to use it to fit your model to data?

As far as I know, if the model landscape has many local minima - which is usually the case with nonlinear systems of high dimension - there is no "general solution". The field of deep neural nets was stuck for many years for that reason, but around 2006 Hinton and Salakhutdinov found a trick to initialise the parameters so that the final optimisation could be done in decent time without getting stuck. For models with few parameters that are analytically tractable, or for which you have enough computing power to cover the space by brute force, the general hope is to solve for the qualitative differences in different parts of parameter space, so that one can get a prediction at least about the general region one is in. When possible, this is best, since it does not assume that only one set of parameters is consistent with the data. I've seen http://www.ncbi.nlm.nih.gov/pubmed/23864375 and http://www.ncbi.nlm.nih.gov/pubmed/23629582, but IIRC your models involve voltage-dependent conductances etc., which I'm less familiar with fitting, and which are usually considered difficult.
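
To make the "qualitative regions" idea concrete, here's a minimal sketch (a toy two-parameter linear oscillator, nothing to do with the actual conductance models) that brute-forces a grid over parameter space and classifies each point by its qualitative behavior rather than fitting a single optimum:

```python
import numpy as np

# Brute-force scan of the toy linear oscillator x'' + c x' + k x = 0.
# Each grid point (c, k) is classified as oscillatory (complex eigenvalues
# of the state matrix) or overdamped, mapping qualitative regimes instead
# of searching for one optimal parameter set.
cs = np.linspace(0.1, 4.0, 40)   # damping
ks = np.linspace(0.1, 4.0, 40)   # stiffness
regime = np.zeros((cs.size, ks.size), dtype=bool)
for i, c in enumerate(cs):
    for j, k in enumerate(ks):
        ev = np.linalg.eigvals(np.array([[0.0, 1.0], [-k, -c]]))
        regime[i, j] = np.abs(ev.imag).max() > 1e-6   # complex pair -> oscillation
print(f"{regime.mean():.0%} of this parameter box is oscillatory")
```

For this toy case the boundary is known analytically (c^2 = 4k), so the scan can be checked; for a high-dimensional conductance model you'd classify each point from simulated behavior instead, and report the region rather than a point estimate.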

Probably it's not truly novel even in my application; I read the suggestion of using genetic algorithms in a paper by Wulfram Gerstner reporting the results of an INCF modelling competition, and of course the multiple-models idea comes from Eve Marder's "Multiple Models..." paper (though I couldn't tell you whether her paper was "novel" either - that's not really the kind of judgment I care about; I learned something from it).

Yes, conductance models have a lot of degeneracy, probably because of all the hyperbolic tangent functions (or Boltzmann functions, if you like). "Fitting them" is not enough; the fit is not unique. You also have to see how, for a particular fit, different qualitative/emergent behaviors arise from small changes in the parameters. So some application of perturbation theory will probably become an important part of analyzing the dynamic "traits" of a particular parameter distribution.
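
As a toy illustration of that degeneracy (hypothetical half-activation voltages, slopes, and conductances - not a real channel model): two quite different maximal-conductance combinations over two overlapping Boltzmann activation curves produce nearly the same summed curve, so no fit to the sum alone can distinguish them:

```python
import numpy as np

def boltz(V, Vhalf, k):
    """Boltzmann steady-state activation curve."""
    return 1.0 / (1.0 + np.exp(-(V - Vhalf) / k))

V = np.linspace(-80.0, 20.0, 201)   # mV
# Two hypothetical channel types with overlapping activation curves.
m1 = boltz(V, -40.0, 6.0)
m2 = boltz(V, -39.0, 6.5)

# Two quite different maximal-conductance combinations...
total_a = 3.0 * m1 + 1.0 * m2
total_b = 1.2 * m1 + 2.8 * m2
# ...give nearly the same summed activation curve: near-degeneracy.
print(np.max(np.abs(total_a - total_b)) / np.max(total_a))
```

The worst-case difference between the two curves is a few percent, well below typical measurement noise, even though the individual conductances differ by more than a factor of two - which is why perturbing each fit and watching for qualitative changes matters more than the fit itself.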