Help on Fitness Function for Genetic Algorithm

In summary: you would need to pick a few Fibonacci numbers as the target and have the fitness function score how closely candidate sequences approximate that target.
  • #1
timthereaper
Hey all, I'm trying to learn how to write genetic algorithms. I've constructed a kind of crude experiment just to see how the algorithm solves:

An array of integers is my "genome". The "ideal genome" is an array of the first ten numbers of the Fibonacci sequence (1,1,2,3,5,8,13,21,34,55). I'm starting with a random population of 10 "organisms" with random genomes (each of the ten numbers is a random integer from 0-100), and hopefully the algorithm will yield something close to the ideal.
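For concreteness, here is a minimal Java sketch of that setup; the class and names (POP_SIZE, GENOME_LENGTH, TARGET, randomPopulation) are my own and not from any library:

```java
import java.util.Random;

public class GaSketch {
    static final int POP_SIZE = 10;       // 10 "organisms"
    static final int GENOME_LENGTH = 10;  // 10 genes per genome
    static final int[] TARGET = {1, 1, 2, 3, 5, 8, 13, 21, 34, 55}; // ideal genome

    // Build the initial population: each gene is a random integer in [0, 100].
    static int[][] randomPopulation(Random rng) {
        int[][] population = new int[POP_SIZE][GENOME_LENGTH];
        for (int[] genome : population) {
            for (int i = 0; i < GENOME_LENGTH; i++) {
                genome[i] = rng.nextInt(101);
            }
        }
        return population;
    }
}
```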

Now, I don't entirely understand how to choose an appropriate "fitness" function for the selection part of the algorithm. I've read online about some ideas but can't seem to implement them in my experiment. I see several potential problems with bad fitness functions:

1. The population grows and grows: if I choose the best candidate in each generation to reproduce n times without removing anyone, the population size grows from 10 in the first generation to something like 15 in the second, 20 in the third, etc.

2. Choosing only the best 5 candidates could cut off all potential for a more complete solution. This would happen if one gene of all the top 5 candidates was equally far away from the target (or something like that).

I'm sure there are other problems, but like I said, I'm learning. I would appreciate any help with this. I'm a decent programmer in both C++ and Java, so programming really isn't an issue for me.
 
  • #2
You do need to limit the population size, and I'd guess to something (much) larger than 10. Choosing a small number of the best candidates is probably reasonable, assuming you mutate them occasionally to cover your search space.

For a metric, how about taking the absolute difference between each gene and the corresponding value in the desired sequence and summing the results? The desired sequence would have a value of 0, and other genomes would score higher the further they diverge.
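A minimal Java sketch of that metric, as another method for the sketch class above (lower is fitter; 0 means an exact match to the target):

```java
// Sum of |gene - target| over all positions: 0 for the desired sequence,
// larger the further a genome diverges from it.
static int distanceToTarget(int[] genome, int[] target) {
    int total = 0;
    for (int i = 0; i < genome.length; i++) {
        total += Math.abs(genome[i] - target[i]);
    }
    return total;
}
```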

The point of a GA is to partition the search space and do some kind of sorta-exhaustive search around areas that seem to have high fitness -- this assumes that there is a gradient in these areas. Then once in a while some of your searchers mutate and jump to other partitions. GA's are good for "fitness landscapes" that have multiple peaks -- more than one fairly-good solution. They are not so good at finding the absolute best solution -- in many cases you really need an exhaustive search. And they may not be the best option for a landscape that has only one peak -- especially if it's a narrow one like your series.
 
  • #3
Thanks for the feedback. It gave me a lot to think about. I have a few more questions.

1. In regards to population size, is there a good number you would suggest? Is there a general rule of thumb?

2. So with a small number of surviving candidates, how do you keep the population the same size every generation? I was reading about GAs (http://www.rennard.org/alife/english/gavintrgb.html), but I didn't quite understand how the author kept the population the same size in every generation, and I don't see how to generalize that approach to more complex systems.

3. As far as mutation goes in my example, would I just choose a couple of genes at random from a few random candidates and stick a random number in there?

Thanks for the tip on the multiple peaks on the "fitness landscapes". I'm trying to learn how these algorithms work so I can try my hand at optimizing engineering designs later and the fitness landscapes of most engineering problems have multiple peaks.
 
  • #4
For starters, I think your problem is ill-formed; this is not a problem well suited to genetic algorithms. Here is why I would argue this:

1. The thing you are looking for is discrete, rather than in any way continuous.
2. Worse, your fitness landscape is not "smooth". A number either is the nth Fibonacci number or it isn't; there's no especially good notion of "close to Fibonacci" that I'm aware of. And Fibonacci numbers aren't really independently discoverable: the value of Fibonacci n depends on Fibonacci n-1 and n-2.

As far as picking a fitness function goes, the idea is that the fitness function describes "the world as you want it to be" and tells you how close to it you are. So, what describes the world you want? You want a sequence where each F(n) equals F(n-1) + F(n-2). So it seems to me the natural thing to do here would be to have the fitness function test for that property: maybe the fitness of your sequence is the number of values S(n) in the sequence S where S(n) = S(n-1) + S(n-2). This doesn't work great, though, because, as I said, the Fibonacci numbers aren't defined "independently", so you may quickly get a series of numbers that mostly has the "Fibonacci property" but isn't the Fibonacci sequence, because value 1 is equal to 100 or something. You will have a hard time getting from there to the true Fibonacci sequence because you will be stuck at a local maximum of the fitness function. You might get better results with something like "the number of values in S, starting with index 0, that have the Fibonacci property". But at this point what you are doing is only barely a genetic algorithm and doesn't offer any of the advantages genetic algorithms can offer.
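A rough Java sketch of both variants, continuing the earlier sketch class; the names and the exact reading of "starting with index 0" are my own interpretation:

```java
// Variant 1: count every position n >= 2 where S[n] == S[n-1] + S[n-2].
static int fibonacciPropertyCount(int[] s) {
    int count = 0;
    for (int n = 2; n < s.length; n++) {
        if (s[n] == s[n - 1] + s[n - 2]) {
            count++;
        }
    }
    return count;
}

// Variant 2: length of the prefix (from index 0) over which the property
// holds without interruption, rewarding sequences built up from the front.
static int fibonacciPrefixLength(int[] s) {
    int n = 2;
    while (n < s.length && s[n] == s[n - 1] + s[n - 2]) {
        n++;
    }
    return n;
}
```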

You could go with schip's suggestion of precomputing the "correct" sequence and testing for distance to that, but at this point it's unclear what the GA adds. Similarly, there's a closed-form formula (Binet's formula) that produces the Fibonacci sequence on the integers; this does allow you to independently compute "the nth Fibonacci number", and you could test for distance to this. But this is similar to schip's idea, since in this case you are (1) calculating the nth Fibonacci number, (2) throwing the correct answer away, and (3) having an unnecessary GA fumble toward it...
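For reference, a sketch of that closed form (Binet's formula), rounded to the nearest integer, which is exact for the small n involved here:

```java
// nth Fibonacci number via the closed form: round(phi^n / sqrt(5)),
// with F(1) = F(2) = 1.
static long fibonacciClosedForm(int n) {
    double phi = (1.0 + Math.sqrt(5.0)) / 2.0;
    return Math.round(Math.pow(phi, n) / Math.sqrt(5.0));
}
```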
 
  • #5
I do see how problematic my experiment is. I just figured if I picked the Fibonacci numbers, a random "genome" wouldn't reproduce them so I could see the algorithm slowly converge. I know the GA won't "add" anything here because the solution is obvious. It's just a learning exercise I came up with. If you have a suggestion on a better way to clearly see how GAs work, let me know. I would be glad to change my experiment.
 
  • #6
timthereaper said:
I do see how problematic my experiment is. I just figured if I picked the Fibonacci numbers, a random "genome" wouldn't reproduce them so I could see the algorithm slowly converge. I know the GA won't "add" anything here because the solution is obvious. It's just a learning exercise I came up with. If you have a suggestion on a better way to clearly see how GAs work, let me know. I would be glad to change my experiment.

If that is your goal, then I think using schip's "distance from precomputed correct Fibonacci sequence" approach is a good way to go about it.

An GA "case study" so to speak I saw once that I thought had a problem well-fit to GAs, and where a GA solution can be implemented quickly, was the optimal Steiner graph solver described in these three blog posts:

http://pandasthumb.org/archives/2006/07/target-target-w-1.html
http://pandasthumb.org/archives/2006/08/take-the-design.html
http://pandasthumb.org/archives/2006/08/design-challeng-1.html

One nice thing about that problem is that it turned out to exhibit the "multiple peaks" property schip mentioned; the GA sometimes found the optimal solution and sometimes found a locally optimal but globally merely good solution. (EDIT: A warning: each of these posts contains some diversions that are mostly only relevant in the context of a flamewar the author was having with some creationists at the time... if you just ignore those parts, though, you're left with a fairly useful discussion of GAs.)

Also, one thing I've noticed is that images work quite well for GAs. If you're just going to aim for getting a closer and closer match to a known target and watch how it changes, then you could use the list of pixel colors in an image as your "genome" and use the distance from a source image as your fitness function. This way, if you compare your results generation by generation, you will get pretty (if useless) pictures...
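A sketch of that pixel-based fitness, assuming the images have already been flattened into arrays of color channel values (the image loading itself is left out):

```java
// Treat each pixel's color channels as genes and score a candidate image by
// its total channel-wise distance from the source image (0 = identical).
static long imageDistance(int[] candidateChannels, int[] sourceChannels) {
    long total = 0;
    for (int i = 0; i < sourceChannels.length; i++) {
        total += Math.abs(candidateChannels[i] - sourceChannels[i]);
    }
    return total;
}
```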
 
  • #7
I only have a glib understanding of GA's, but I would expect one needs a population size somewhat larger than the number of elements or "genes" so that one doesn't lose good candidates. Surely there are some rules of thumb out there?

One thing I notice, now, is that you don't mention deleting poorly performing elements. Maybe that's why you have ever-expanding populations? In keeping with the biological analogy, perhaps you can enforce a resource limit and end up with a logistic equation...
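One simple way to enforce such a limit is truncation selection: sort by fitness, keep only the best few, and refill back to a fixed population size with copies of them, discarding the rest. A rough sketch, reusing the distanceToTarget method from earlier; the survivor count is an arbitrary parameter:

```java
import java.util.Arrays;
import java.util.Comparator;

// Keep the population at a fixed size: the worst individuals are discarded
// each generation and replaced by copies of the best, which would then be
// mutated (see the mutation sketch below).
static int[][] nextGeneration(int[][] population, int[] target, int survivors) {
    int[][] sorted = population.clone();
    Arrays.sort(sorted, Comparator.comparingInt((int[] g) -> distanceToTarget(g, target)));

    int[][] next = new int[population.length][];
    for (int i = 0; i < population.length; i++) {
        next[i] = sorted[i % survivors].clone();  // cycle through the survivors
    }
    return next;
}
```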

For mutation, yes, it sounds like inserting a couple of random numbers once in a while might do the trick. I think you might also use slowly changing values to do local searches, but this runs into what is now called memetic computing -- a fancy name for whatever-works.
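A sketch of both kinds of mutation mentioned above; the mutation rates and the step size are arbitrary numbers of mine, not anything standard:

```java
import java.util.Random;

// Occasionally reset a gene to a completely new random value (the "jump"),
// and more often nudge a gene by a small amount (the local search).
static void mutate(int[] genome, Random rng) {
    for (int i = 0; i < genome.length; i++) {
        double roll = rng.nextDouble();
        if (roll < 0.02) {
            genome[i] = rng.nextInt(101);                       // random reset
        } else if (roll < 0.20) {
            genome[i] += rng.nextInt(7) - 3;                    // small step in [-3, +3]
            genome[i] = Math.max(0, Math.min(100, genome[i]));  // stay in range
        }
    }
}
```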

As to pre-computing the "correct" sequence: most fitness functions presume some idea of what the correct result might be. When adjusting weights while training a neural net, you compare the output to the desired values (say, for a character recognizer) and feed that error back as the fitness. This particular problem has a more "obvious" answer, and is perhaps not the best for a GA, but it is not -- IMHO -- fundamentally different in quality.

For further study, you might like the Wolpert/Macready No Free Lunch paper:
http://www.santafe.edu/media/workingpapers/95-02-010.pdf
which proves that no algorithm will work better than a random search on every fitness landscape -- including the zillions which have no structure whatsoever. This got picked up and expanded by the previously mentioned "creationists", e.g., Dembski, as proof that Evolution couldn't work. The funny thing is, they completely miss the point that the natural environment is a set of specific fitness landscapes which have a (large) number of (sorta) optimum "solutions", and that these sorts of landscapes are exactly what GA's are good at.
 

1. What is a fitness function?

A fitness function is a mathematical function used in genetic algorithms to evaluate the suitability of a potential solution. It assigns a score or fitness value to each individual in a population, which is used to guide the evolution of the population towards a better solution.

2. How do I design a fitness function for my genetic algorithm?

The design of a fitness function depends on the specific problem you are trying to solve with your genetic algorithm. Generally, you will need to identify the key criteria or goals for your solution and translate them into a mathematical formula. It is important to strike a balance between a fitness function that is too simple and one that is overly complex, as both can lead to suboptimal results.

3. What is the role of a fitness function in a genetic algorithm?

A fitness function is a crucial component of a genetic algorithm, as it determines which solutions are most likely to contribute to the next generation. By assigning fitness values to individuals, it provides a metric for evaluating and comparing potential solutions, allowing the algorithm to evolve towards a better solution over multiple generations.

4. Can I change the fitness function during the evolution process?

Yes, it is possible to change the fitness function during the evolution process. This can be useful if you discover a flaw in your initial fitness function or if you want to incorporate new information or goals into the algorithm. However, it is important to note that changing the fitness function can significantly impact the evolution of the population and should be done carefully.

5. How do I know if my fitness function is effective?

The effectiveness of a fitness function is determined by its ability to lead the genetic algorithm towards a good solution. One way to evaluate it is to compare the performance of the algorithm using different fitness functions. You can also analyze the convergence rate and the fitness values of the best individuals in each generation to assess how well the fitness function is guiding the search.
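As one concrete way to do that kind of monitoring, here is a minimal sketch that logs the best score each generation and stops once it has stopped improving for a while. It reuses the methods sketched earlier in the thread, and the thresholds are arbitrary:

```java
import java.util.Random;

// Log the best (lowest) distance per generation and stop early if it has
// not improved for 'stallLimit' generations in a row.
static void runAndMonitor(int generations, int stallLimit, Random rng) {
    int[][] population = randomPopulation(rng);   // from the earlier sketch
    int best = Integer.MAX_VALUE;
    int stalled = 0;

    for (int gen = 0; gen < generations && stalled < stallLimit; gen++) {
        population = nextGeneration(population, TARGET, 3);  // keep the size fixed
        for (int[] genome : population) {
            mutate(genome, rng);
        }

        int generationBest = Integer.MAX_VALUE;
        for (int[] genome : population) {
            generationBest = Math.min(generationBest, distanceToTarget(genome, TARGET));
        }
        System.out.println("generation " + gen + ": best distance = " + generationBest);

        stalled = (generationBest < best) ? 0 : stalled + 1;
        best = Math.min(best, generationBest);
    }
}
```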
