“I've talked to some GA people and seen some presentations, and I have to say that I think GA researchers should communicate this more clearly. By overselling their research as a drop-in replacement for, and improvement upon, all optimization strategies, GA researchers risk losing some credibility.”
To elaborate on the above: I think there’s a tendency in the GA research community (and perhaps this generalizes to researchers who specialize in any particular technique, be it within ML, optimization, or otherwise) to lose sight of the bigger picture of where their research fits within the wider field. It’s easy to get caught up in researching how and why the algorithm works (and doesn’t work), or in optimizing a GA for a particular application area or problem type, and to forget the context in which this work sits - i.e., what types of problems GAs are actually good at solving. There also isn’t enough benchmarking carried out against other techniques - this is perhaps one place where we as GA researchers lose credibility.
Having said that, I’m not sure that GP is only suitable for solving dynamic optimization problems. I ended up working on dynamic problems (and on inducing robustness more generally) partly by accident, so I’m not the best person to draw general conclusions about the utility of GP/GAs outside those problem types (i.e. noisy/dynamic problems). But I agree that the GA research community as a whole should focus on demonstrating results across a variety of problem types, showing how they stack up against alternative approaches, and moving the focus away from evangelizing about the technique.
“To be truly dynamic might mean that selection and evolution continues in the dynamic environment. i.e. that we are not using a single individual, or a static population of individuals, to try to drive our decisions. I take it this is what you mean? “
Yes, that’s what I mean. One approach is “random immigrants”: randomly generated individuals are introduced into the population, replacing existing individuals, when a change in the environment is detected. Another approach maintains a constantly updated memory of solutions that have done well in the past (its suitability depends on how the environment is characterized).
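As a rough illustration, a random-immigrants step might look like the sketch below. Everything here is a placeholder assumption for illustration - the bit-string representation, the “replace the worst fraction” policy, and the function names are not from any particular study:

```python
import random

GENOME_LENGTH = 10

def random_individual():
    """A toy bit-string individual (illustrative representation)."""
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def inject_random_immigrants(population, fraction=0.2):
    """Replace the worst `fraction` of the population with random individuals.

    Assumes `population` is a list of (fitness, genome) pairs with higher
    fitness being better; in practice the replacement policy varies
    (replace worst, replace random, etc.).
    """
    n_immigrants = max(1, int(len(population) * fraction))
    # Keep the best individuals, drop the worst n_immigrants.
    survivors = sorted(population, key=lambda ind: ind[0], reverse=True)
    survivors = survivors[: len(population) - n_immigrants]
    # Immigrants start with unknown (here: zero) fitness until evaluated.
    immigrants = [(0.0, random_individual()) for _ in range(n_immigrants)]
    return survivors + immigrants

# Usage: on detecting an environment change, refresh part of the population.
population = [(random.random(), random_individual()) for _ in range(20)]
population = inject_random_immigrants(population, fraction=0.25)
```

The idea is simply to re-inject diversity so the population can re-adapt after the fitness landscape shifts, rather than staying converged on a now-stale optimum.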
“You want it to be the case that certain "patterns" in the market data appear again and again, allowing the relevant robust "gene" to become useful again in the future.”
I haven’t looked at this at the level of genes - we don’t really work with genes in GP, since in tree-based GP the genotype and phenotype are the same - the closest analogue is a subtree (an isolated solution component). (John Mark’s thesis was largely concerned with modularity - that is, isolating useful subtrees to be reused when building solutions - but afaik he didn’t look at dynamic environments.)
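To make the genotype-equals-phenotype point concrete, here is a minimal sketch of a tree-based GP individual using a nested-tuple representation (the representation and the helper names `eval_tree` and `subtrees` are my illustrative assumptions, not from any library):

```python
def eval_tree(tree, x):
    """Evaluate an expression tree at input x.

    In tree-based GP the genotype IS the phenotype: the same tree both
    encodes the individual and is directly the executable expression.
    """
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    a, b = eval_tree(left, x), eval_tree(right, x)
    return a + b if op == "+" else a * b

def subtrees(tree):
    """Yield every subtree - each one a candidate reusable solution component."""
    yield tree
    if isinstance(tree, tuple):
        _, left, right = tree
        yield from subtrees(left)
        yield from subtrees(right)

# The tree for (x + 3) * x; the subtree ("+", "x", 3) is the kind of
# isolated component that work on modularity tries to identify and reuse.
individual = ("*", ("+", "x", 3), "x")
print(eval_tree(individual, 2))            # → 10
print(len(list(subtrees(individual))))     # → 5
```

So where a bit-string GA might talk about a robust “gene”, the nearest GP analogue would be a subtree like `("+", "x", 3)` that keeps earning its place in good solutions.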
There have been studies that used a memory of individuals that performed well in the past. This approach suits environments where conditions similar or identical to those seen before are likely to recur (e.g. your Christmas example :-)
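A hedged sketch of what such a memory might look like is below. The environment “signature”, the capacity, and the exact-match recall are all illustrative assumptions - real systems use some distance measure over a characterization of the environment:

```python
class SolutionMemory:
    """Archive of solutions that did well under past environmental conditions."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.entries = []  # list of (env_signature, genome, fitness)

    def store(self, env_signature, genome, fitness):
        """Remember a good solution; evict the weakest entry when full."""
        self.entries.append((env_signature, genome, fitness))
        if len(self.entries) > self.capacity:
            self.entries.remove(min(self.entries, key=lambda e: e[2]))

    def recall(self, env_signature):
        """Return stored genomes whose environment matches the current one.

        'Matches' here is exact equality for simplicity; in practice one
        would compare environment characterizations by similarity.
        """
        return [g for sig, g, f in self.entries if sig == env_signature]

# Usage: store winners per regime, re-seed the population when a regime recurs.
memory = SolutionMemory(capacity=3)
memory.store("december", [1, 0, 1], fitness=0.9)  # e.g. the Christmas regime
memory.store("summer", [0, 0, 1], fitness=0.7)
seeds = memory.recall("december")  # candidates to re-inject into the population
```

When a previously seen regime is detected again, recalled individuals can be injected into the population much like random immigrants, except that they start from a known-good point instead of a random one.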