Neural Networks -- using the NN classifier as the fitness function

In summary, a trained neural network can classify cat images with high accuracy, but when it is used as the fitness function for a genetic algorithm that "breeds" images together, the resulting images rarely resemble actual cats. The genetic algorithm converges on the most reliable cat indicators encoded in the network, which need not produce images that look like cats to a human. The network's notion of "not a cat" shapes the generated images just as much as its notion of "cat". Retraining on genetic-algorithm-generated false positives may not help much, since there is infinite variation among them, but the genetic algorithm may expose "poison" patterns that are always rejected as non-cat, which could be used to improve classification performance.
  • #1

DavidSnider

Let's say you've trained a neural network and it can classify cat images with good (90%) accuracy.

Now let's say you use a genetic algorithm and start 'breeding' images together using the NN classifier as the fitness function.

Will you eventually start generating pictures that look like cats?
 
  • #2
DavidSnider said:
Let's say you've trained a neural network and it can classify cat images with good (90%) accuracy.

Now let's say you use a genetic algorithm and start 'breeding' images together using the NN classifier as the fitness function.

Will you eventually start generating pictures that look like cats?
You will get images that look like a cat to the NN. The chance that they will look like cats to a person is about 0%.

The genetic algorithm will tend to concentrate on the most reliable cat indicators identified by the NN - that is, indicators encoded in the NN that most reliably trigger a cat identification. So if the NN tends to rely on cat-fur texture, the genetic algorithm will tend to put that texture into its images - but not necessarily in a way that resembles actual cat fur. And the location of the "fur" in the image, and its position relative to other image elements, will be consistent with what the NN expects for cat pictures in general - but not necessarily for any specific cat image.
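As a minimal sketch of the setup under discussion: everything here is hypothetical, and in particular `nn_cat_score` is a stand-in for a real trained classifier (it rewards an alternating bright/dark "texture", the way a real NN might key off fur texture). The GA simply uses the classifier's score as its fitness:

```python
import random

IMG_LEN = 16  # toy "image": a flat vector of pixel values in [0, 1]

def nn_cat_score(img):
    """Stand-in for a trained classifier's P(cat).
    Rewards high-contrast adjacent pixels - a crude 'fur texture' proxy."""
    return sum(abs(a - b) for a, b in zip(img, img[1:])) / (IMG_LEN - 1)

def crossover(a, b):
    """Splice two parent images at a random cut point."""
    cut = random.randrange(1, IMG_LEN)
    return a[:cut] + b[cut:]

def mutate(img, rate=0.05):
    """Randomly replace a few pixels."""
    return [random.random() if random.random() < rate else p for p in img]

def evolve(pop_size=40, generations=60):
    """Breed images, keeping the half the classifier likes best each round."""
    pop = [[random.random() for _ in range(IMG_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=nn_cat_score, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=nn_cat_score)

best = evolve()
print(round(nn_cat_score(best), 2))  # high score - but nothing like a cat
```

The evolved image maximizes the proxy feature, not "catness" - exactly the failure mode described above.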

The result will also reflect as much about what the NN has been trained to treat as not a cat as about what it treats as a cat. So if quite a few of the non-cat images included seashore scenes, but none of the cat images did, the identifiers for seashore scenes will be missing from the generated images.
 
  • #3
Interesting. So I guess Neural Networks are pretty good at spotting cats in pictures similar to the ones they have been trained on, but in the world of all possible images they are still pretty much clueless. I'm guessing 'training out' Genetic Algorithm generated positives wouldn't be very useful because there is infinite variation?
 
  • #4
DavidSnider said:
So I guess Neural Networks are pretty good at spotting cats in pictures similar to the ones they have been trained on, but in the world of all possible images they are still pretty much clueless.
They attempt to identify image features that distinguish between cat and non-cat, based on the training set. Those image features can be anything - but normally a convolutional NN is used. A convolution will tend to highlight selected spatial frequencies in the image. So one might tend to catch sharp edges while another identifies smoother transitions. The NN identifies patterns in the training set from this spatial frequency data that have often been cats and seldom anything else.
So we have to be careful when we use the word "similar". What the NN considers similar may not be what the human brain recognizes as a similarity.
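The spatial-frequency point above can be made concrete with a toy 1-D example. The kernels and signal here are illustrative only: one kernel responds to sharp transitions, the other passes gradual ones, and the edge filter's strongest response lands exactly at the hard edge in the signal:

```python
import numpy as np

# Two 1-D convolution kernels tuned to different spatial frequency content.
edge_kernel = np.array([-1.0, 1.0])          # responds to sharp transitions
smooth_kernel = np.array([0.25, 0.5, 0.25])  # passes gradual transitions

# A toy "scanline": flat, then a hard edge, then a gentle ramp back down.
signal = np.concatenate([np.zeros(8), np.ones(8), np.linspace(1, 0, 8)])

edge_response = np.convolve(signal, edge_kernel, mode="valid")
smooth_response = np.convolve(signal, smooth_kernel, mode="valid")

# The edge filter peaks right where the hard 0 -> 1 jump occurs.
print(int(np.argmax(np.abs(edge_response))))  # prints 7
```

A 2-D convolutional layer does the same thing over image patches: each filter highlights one band of spatial detail, and the NN learns which combinations of those responses "have often been cats and seldom anything else."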

DavidSnider said:
I'm guessing 'training out' Genetic Algorithm generated positives wouldn't be very useful because there is infinite variation?
OK, but there was not infinite variation in the non-cat part of the training set. The genetic algorithms may identify "poison" patterns, ones that will always be rejected as non-cat. Since generating an image with such a pattern is likely to kill the algorithm, algorithms that check for the poison and fix it before finishing the image will tend to survive.
 
  • #5
.Scott said:
OK, but there was not infinite variation in the non-cat part of the training set. The genetic algorithms may identify "poison" patterns, ones that will always be rejected as non-cat. Since generating an image with such a pattern is likely to kill the algorithm, algorithms that check for the poison and fix it before finishing the image will tend to survive.

I'm not sure I follow this.

We train the NN on a set of images: some labeled by humans as cat, some as non-cat, but all 'normal' photos. Then we get a GA to converge on a bunch of images that the NN thinks are cat-like with high probability. A human looks at these images and they look more or less like noise - certainly non-cat-like by human standards. (Is this what you mean by a 'poison pattern'?) Then we add these images to the training set labeled as non-cat and retrain.

How would you expect this to affect the new NN's ability to label as 'cat' images that humans would also label as 'cat'?

When we re-run the GA on this new NN, are we still going to converge on data that looks like noise to a human?
 
  • #6
DavidSnider said:
I'm not sure I follow this.

We train the NN on a set of images: some labeled by humans as cat, some as non-cat, but all 'normal' photos. Then we get a GA to converge on a bunch of images that the NN thinks are cat-like with high probability. A human looks at these images and they look more or less like noise - certainly non-cat-like by human standards. (Is this what you mean by a 'poison pattern'?)
Let's say you have 100 cat images and 100 non-cat images, and you have several convolution filters; say filter number 3 is a mid-pass filter (perhaps a wavelength of 6 pixels). And let's say that, as it happens, one of your "max" values from this filter is very telling - call it F3M9. We get an F3M9 less than 4 for 70% of the non-cat images and a value greater than 10 for every cat image. With that type of input, the NN is going to key off that F3M9 value: if it sees F3M9 < 10, it will weigh the image as very unlikely to be a cat.
So F3M9 < 10 would become a "poison pattern": if it is present, the image mustn't be a cat. So a GA could eventually learn to check for this pattern and avoid it.
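A sketch of that veto-and-repair logic, with everything hypothetical (F3M9 and the thresholds are the made-up feature from the example above, and `nn_verdict` is a stand-in for the trained NN's decision rule):

```python
def nn_verdict(features):
    """Stand-in for the NN's decision: F3M9 < 10 is a hard veto
    (the 'poison pattern'); otherwise other evidence is weighed."""
    if features["F3M9"] < 10:
        return "not-cat"  # poison pattern: always rejected
    return "cat" if features["other_evidence"] > 0.5 else "not-cat"

def ga_repair(features):
    """A GA individual that has learned to check for the poison
    and fix it before submitting its image."""
    if features["F3M9"] < 10:
        features = dict(features, F3M9=10.0)  # hypothetical repair step
    return features

img = {"F3M9": 3.0, "other_evidence": 0.9}
print(nn_verdict(img))             # prints not-cat  (vetoed)
print(nn_verdict(ga_repair(img)))  # prints cat      (poison repaired)
```

Individuals that carry the repair step survive; those that don't are reliably killed by the veto, which is the selection pressure described above.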

It is important to note that we have to add something to our GA survival formula that prevents an algorithm from finding a winning cat image and then just repeating that image unchanged, over and over again - knowing that it will always get a win.
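One hypothetical way to build that into the survival formula is a novelty discount: keep an archive of past winners and scale down the score of any image that is effectively a repeat of one. (The distance measure and penalty value here are illustrative choices, not a standard recipe.)

```python
def novelty_adjusted(raw_score, img, archive, penalty=0.9):
    """Discount the NN's raw score for images effectively identical
    to an archived past winner, so pure repetition stops paying off."""
    for past in archive:
        dist = sum((a - b) ** 2 for a, b in zip(img, past))
        if dist < 1e-6:  # effectively a repeat of an archived winner
            return raw_score * (1.0 - penalty)
    return raw_score

archive = [[0.1, 0.9, 0.1]]                       # a past winning "image"
repeat = novelty_adjusted(1.0, [0.1, 0.9, 0.1], archive)
novel = novelty_adjusted(1.0, [0.9, 0.1, 0.9], archive)
print(repeat < novel)  # prints True: the repeat earns far less
```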

Ideally, our GA should eventually evolve the ability to mimic the NN process - at least well enough to score wins every time.

DavidSnider said:
Then we add these images to the training set labeled as non-cat and retrain.
How would expect this to affect the new NN's ability to label images 'cat' that humans would also label as 'cat'?
When we re-run GA on this new NN are we're still going to converge on data that looks like noise to a human?

So you would have your original 100 cat images and 100 non-cat images.
Then, say, you let your GA "evolve" overnight, and in the morning you collect 1000 images.
You review all 1000 and you don't like any of them, so you add them to the training set. Now you have 100 cats and 1100 non-cats in your training set.

Then you let your GA "evolve" overnight again and collect another 1000 images.
You review all 1000 and one kind of looks like a cat. You add them all to the training set. Now you have 101 cats and 2099 non-cats in your training set.

And so on.

The problem would be the information capacity of your NN. If it could store all of the original 100 images in its programming (in terms of its learning capacity), you would tend to drive it towards that state - except that you would occasionally find new images of cats.
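The bookkeeping of that loop can be written out directly (the function name is just for illustration): each overnight round adds the whole GA batch to the training set, with the rare human-approved images as positives and everything else as negatives.

```python
def grow_training_set(n_cats, n_noncats, batch_size, cats_found):
    """One round of the loop above: the GA generates a batch, a human
    reviews it, the few that look like cats become positives and
    the rest become negatives."""
    return n_cats + cats_found, n_noncats + (batch_size - cats_found)

counts = (100, 100)                                                  # original set
counts = grow_training_set(*counts, batch_size=1000, cats_found=0)   # (100, 1100)
counts = grow_training_set(*counts, batch_size=1000, cats_found=1)   # (101, 2099)
print(counts)
```

The negatives grow roughly a thousand times faster than the positives, which is why the NN's limited capacity becomes the bottleneck.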
 
  • #7
Google had an NN viewer at some stage that let you see what each layer of an NN looked like; it was quite surreal. I'll see if I can find a link.

Cheers
 

1. What is a neural network classifier?

A neural network classifier is a type of machine learning algorithm that is designed to identify patterns and relationships within a dataset to make predictions or classify data into different categories. It works by mimicking the structure and function of the human brain, with interconnected nodes that process and transmit information.

2. How is the NN classifier used as a fitness function?

The NN classifier can be used as a fitness function in evolutionary algorithms, where the classifier is used to evaluate the performance of potential solutions. The classifier assigns a fitness score to each solution based on how well it performs on a specific task, and this score is used to guide the evolution of the solutions towards better performance.

3. What are the advantages of using a neural network classifier as a fitness function?

One of the main advantages of using a neural network classifier as a fitness function is its ability to handle complex and non-linear relationships within the data. This allows for more accurate evaluation and selection of solutions, leading to better performance. Additionally, neural networks can adapt and learn from the data, making them suitable for a wide range of applications.

4. Are there any limitations to using the NN classifier as a fitness function?

One limitation of using a neural network classifier as a fitness function is that it requires a large amount of data to train and fine-tune the network. This can be time-consuming and computationally expensive. Additionally, the performance of the classifier is highly dependent on the quality and diversity of the training data.

5. How can the performance of a NN classifier be improved as a fitness function?

To improve the performance of a neural network classifier as a fitness function, it is important to carefully select and prepare the training data. This can involve techniques such as data augmentation and feature engineering to increase the diversity and quality of the data. Additionally, adjusting the architecture and parameters of the neural network can also help improve its performance as a fitness function.

