I picked up your book Evolution 2.0 basically by pure luck. It was very interesting and well written, and it places all the cards on the table, which is rare; thank you. To my surprise, there is also a prize attached to the book.
I think I can show how control-information can appear in our universe. I cannot demonstrate it chemically, and maybe not even in a computer simulation, but I think I can explain the theory, and demonstrate its various aspects, well enough that you might be convinced. It will also show why it is neither patentable nor worth 10 million dollars to a business.
I also wanted to give some feedback on your views and contrast them with some of my own thinking. I think "neo-Darwinists", as you would probably classify me, will disagree when people posit that:
1. Phenotype can imprint on genotype; 2. Useful adaptations happen more than expected by random chance alone.
While I think both are indeed false, they do not imply your Evolution 1.0 model, because for both statements there are higher-order effects that need to be taken into account.
1) Genotype is anything that can transmit information forward. In other words, some phenotype changes are also genotype changes. But the information carried forward is never greater than the change itself, and it will disappear if the same aspect of the phenotype changes again; that is, it doesn't leave an imprint.
Example: not being cared for enough during development (e.g. by licking) can tune hormones such that this tuning is transmitted to offspring. Most likely this is because the animal tends to be less caring toward her own offspring as well, but maybe a more cell-based chemical signal is at play, or maybe both.
Example: generations living around a body of water and swimming a lot will by no means make it more likely that offspring develop fins. Instead, offspring with better-adapted limbs might do better in this environment; this is how something like fins might eventually appear in the population.
2) Perhaps the best way to show this one is to point out the first higher-order effect: variation rate. Evolution is self-replication with variation in a shared-resource environment. But if we mathematically model just the first two, replication and variation, we get a model where doubling rate is all that matters. The fastest version that appears will, given unlimited time, always become the most numerous. If after a number of rounds we stop and pick 100 versions at random, we can say:
1. The faster the version, the more likely it is to be represented in our sample; 2. these faster versions have a certain, close-to-optimal, balance between doubling and variation.
That last effect is because versions that never vary will never produce faster versions, and versions that only ever produce variations will have no doubling effect at all. They will take longer to find versions that do than the universe has time for (this depends on implementation details, but it's not that relevant).
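The replicator model above can be sketched in a few lines of Python. All numbers here (population size, copy counts, mutation step sizes) are made up purely for illustration; the only point is that when both speed and variation rate are heritable and mutable, the sampled survivors end up fast, with a variation rate somewhere strictly between "never vary" and "always vary".

```python
import random

random.seed(0)

# Toy replicator: a (speed, variation_rate) pair. Faster versions leave
# more copies; each copy may mutate either trait. Parameters are invented.
def step(pop):
    new = []
    for speed, var in pop:
        copies = 1 + int(speed)              # faster versions leave more copies
        for _ in range(copies):
            s, v = speed, var
            if random.random() < v:          # variation may change either trait
                s = max(0.0, s + random.gauss(0, 0.5))
                v = min(1.0, max(0.0, v + random.gauss(0, 0.05)))
            new.append((s, v))
    # shared-resource environment: only a fixed number of slots survive
    random.shuffle(new)
    return new[:500]

pop = [(1.0, 0.1)] * 500
for _ in range(100):
    pop = step(pop)

sample = random.sample(pop, 100)             # "stop and pick up 100 versions"
avg_speed = sum(s for s, _ in sample) / 100
avg_var = sum(v for _, v in sample) / 100
print(avg_speed, avg_var)
```

Running this, the average speed in the sample rises well above the starting value of 1.0, while the variation rate stays away from both 0 and 1, which is the tuned balance described above.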
In general, if there is a tunable parameter, no matter how high-level its effect, it will be tuned. For example, if there can be gene-agnostic and gene-aware DNA repair mechanisms, and the latter is advantageous, evolution will make use of it. That opens up new possible mutations and changes their relative chances of appearing. In your spam example, Evolution 1.0 does not imply only letter-aware mutations; it can include word-aware, and even sentence-aware, mutation mechanisms.
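A minimal sketch of what "letter-aware" versus "word-aware" mutation could mean, using an arbitrary sentence as the genome. The operator names and the vocabulary are mine, purely illustrative; the point is only that the two operators explore very different neighborhoods of the same genome.

```python
import random

random.seed(1)

LETTERS = "abcdefghijklmnopqrstuvwxyz "

def letter_aware_mutation(text):
    # replace one character, blind to word boundaries
    i = random.randrange(len(text))
    return text[:i] + random.choice(LETTERS) + text[i + 1:]

def word_aware_mutation(text, vocabulary):
    # replace one whole word with another from a known vocabulary
    words = text.split()
    i = random.randrange(len(words))
    words[i] = random.choice(vocabulary)
    return " ".join(words)

genome = "buy cheap pills now"
vocab = ["buy", "cheap", "pills", "now", "free", "offer"]
print(letter_aware_mutation(genome))
print(word_aware_mutation(genome, vocab))
```

The letter-aware operator mostly produces gibberish; the word-aware one always produces a grammatical-looking sentence, which is exactly the kind of higher-order mutation mechanism the paragraph above argues Evolution 1.0 does not exclude.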
Lastly, on chapter 25: in the case of information, the goal is negatively formulated: stay around. Entropy will erase any unlikely information unless it is somehow effective at staying around. That is how the universe gives evolution its goal. But it is an open-ended goal: a push away from being erased, not a pull towards any specific endpoint.
Yes, genetic algorithms get stuck in local optima, but more complex environments that include predators suppress that tendency by disallowing too narrow an optimum. (There is a paper on using a GA to evolve sorting algorithms that highlights this; I cannot find it right now.)
Also note that most GA environments preclude higher-order effects, because the mutation, replication, and selection mechanisms are not themselves simulated, but fixed and provided by the simulation.
Lastly, for a GA, random genotype changes can have dramatic phenotype effects. Just imagine changing a single bit of computer code: it probably leads to errors and crashes (but not for a cell!), but it might also flip a boolean expression. Your password no longer logs you in; instead, all faulty passwords do. One bit flip, major change.
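The password example can be made concrete with a deliberately artificial encoding: one bit of a "program" byte selects between a normal and an inverted comparison. This is not real machine code, just an illustration of how a single-bit change can invert behavior.

```python
# A one-bit change with a large behavioral effect: the "program" below is
# a single byte whose lowest bit selects the comparison. Flipping that
# bit turns "accept the right password" into "accept every wrong one".
# Purely illustrative encoding, invented for this sketch.

def check(password, program):
    matches = (password == "secret")
    if program & 1:          # bit 0 set: normal check
        return matches
    else:                    # bit 0 cleared: inverted check
        return not matches

intact = 0b00000001
flipped = intact ^ 0b00000001   # flip a single bit

print(check("secret", intact))    # True
print(check("secret", flipped))   # False
print(check("wrong", flipped))    # True
```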
I'll leave it at this for now. Thank you for your time; I am looking forward to your responses, so I hope you are interested.
I wonder why no one has answered for 8 months! I can tell you something, though. Among programming paradigms, the GA is one of the most robust: the alteration of a single bit does not crash the system; a generation simply takes a different path from the others, because its parallelism makes a GA more robust than a purely sequential algorithm. Current GAs, however, use formal methods of mutation to simulate what happens in reality, which itself follows no formal rules at all. The real difference is all there: maybe GAs are the right way, but they are implemented in the wrong way. These are just my reflections, but I hope they can light up some minds. If you do find the right algorithm, though, keep it secret, or share it with me :-)