Neural Networks -- using the NN classifier as the fitness function


Discussion Overview

The discussion revolves around the use of a neural network (NN) classifier as a fitness function in a genetic algorithm (GA) to generate images that resemble cats. Participants explore the implications of this approach, particularly regarding the NN's ability to classify images and the potential outcomes of the GA's image generation process.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants suggest that while a NN can classify cat images with high accuracy, it may not generate images that humans perceive as resembling cats, as the GA may focus on features that are not visually coherent to humans.
  • Others argue that the NN's training set influences its classification, and the GA may produce images that reflect the NN's learned indicators rather than actual cat likeness.
  • A participant raises the concern that the NN's understanding of similarity is limited to its training data, which may not align with human perception.
  • There is a discussion about "poison patterns," which are features that the NN learns to associate with non-cat images, potentially leading the GA to avoid these patterns in its generated images.
  • Some participants question how adding GA-generated images, which may appear as noise to humans, to the training set would affect the NN's future classification capabilities.
  • Concerns are raised about the NN's information capacity and whether it can adapt to new images of cats while maintaining its ability to distinguish them from an increasing number of non-cat images.

Areas of Agreement / Disagreement

Participants express differing views on the effectiveness of using a NN classifier as a fitness function in a GA. There is no consensus on whether the generated images will resemble cats to humans or how the NN's training set will influence its future performance.

Contextual Notes

Limitations include the NN's reliance on its training set, the potential for generating images that do not align with human perception, and the challenges posed by "poison patterns" in the training data.

DavidSnider
Let's say you've trained a neural network and it can classify cat images with good (90%) accuracy.

Now let's say you use a genetic algorithm and start 'breeding' images together using the NN classifier as the fitness function.

Will you eventually start generating pictures that look like cats?
 
DavidSnider said:
Let's say you've trained a neural network and it can classify cat images with good (90%) accuracy.

Now let's say you use a genetic algorithm and start 'breeding' images together using the NN classifier as the fitness function.

Will you eventually start generating pictures that look like cats?
You will get images that look like a cat to the NN. The chance that they will look like cats to a person is about 0%.

The genetic algorithm will tend to concentrate on the most reliable cat indicators identified by the NN. That is to say: indicators coded in the NN that most reliably trigger a cat identification. So if the NN tends to rely on cat fur texture, the genetic algorithm will tend to put that texture into its images - but not necessarily in a way that resembles actual cat fur. And the location of the "fur" in the image and its position relative to other image elements will be consistent with what the NN expects for cat pictures in general - but not necessarily for any specific cat image.
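As a concrete sketch of the setup under discussion, here is a minimal GA loop that "breeds" images using a classifier score as the fitness function. Everything here is illustrative: `score_cat` is a hypothetical stand-in for the trained NN, and the population size, mutation rate, and breeding scheme are arbitrary choices, not anything from the thread.

```python
import numpy as np

def evolve_images(score_cat, pop_size=20, shape=(16, 16), steps=100, seed=0):
    """Evolve a population of grayscale images toward high classifier scores.

    `score_cat` is a hypothetical stand-in for the trained NN: any function
    mapping an image array to the classifier's P(cat) in [0, 1].
    """
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, *shape))            # random initial images
    for _ in range(steps):
        fitness = np.array([score_cat(img) for img in pop])
        parents = pop[np.argsort(fitness)[::-1][:pop_size // 2]]  # fittest half
        # "Breed": average random parent pairs, then mutate ~1% of pixels.
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        pop = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
        mutate = rng.random(pop.shape) < 0.01
        pop[mutate] = rng.random(int(mutate.sum()))
    scores = [score_cat(img) for img in pop]
    return pop[int(np.argmax(scores))]
```

With a trivial fitness function such as mean brightness, the loop visibly climbs toward whatever the scorer rewards - which is exactly the point made above: the result optimizes the scorer's criteria, not human perception.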

The result will also reflect as much about what the NN has been trained to reject as non-cat as about what it has been trained to accept as cat. So if quite a few of the non-cat images included seashore scenes, but none of the cat images did, the identifiers for seashore scenes will be missing from the generated images.
 
Interesting. So I guess neural networks are pretty good at spotting cats in pictures similar to the ones they have been trained on, but in the world of possible images they are still pretty much clueless. I'm guessing 'training out' GA-generated positives wouldn't be very useful because there is infinite variation?
 
DavidSnider said:
So I guess neural networks are pretty good at spotting cats in pictures similar to the ones they have been trained on, but in the world of possible images they are still pretty much clueless.
They attempt to identify image features that distinguish cat from non-cat, based on the training set. Those image features can be anything - but normally a convolutional NN is used. A convolution will tend to highlight selected spatial frequencies in the image. So one filter might tend to catch sharp edges while another identifies smoother transitions. The NN then identifies patterns in this spatial-frequency data that, in the training set, have often indicated cats and seldom anything else.
So we have to be careful when we use the word "similar". What the NN considers similar may not be what the human brain recognizes as a similarity.
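A toy illustration of the point about spatial frequencies: a simple high-pass kernel responds strongly to a sharp edge and only weakly to a smooth ramp. This is a deliberately minimal 1-D sketch, not how a real convolutional layer is implemented.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Valid-mode 1-D convolution (no padding), enough to show the idea."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel[::-1]
                     for i in range(len(signal) - k + 1)])

# A sharp step (edge) versus a smooth ramp over the same range.
edge = np.concatenate([np.zeros(8), np.ones(8)])
ramp = np.linspace(0.0, 1.0, 16)

# High-pass kernel: responds to sharp transitions, not gradual ones.
high_pass = np.array([-1.0, 1.0])

edge_response = np.abs(conv1d_valid(edge, high_pass)).max()   # large
ramp_response = np.abs(conv1d_valid(ramp, high_pass)).max()   # small
```

The edge produces a response of 1.0 where the step occurs, while the ramp never exceeds 1/15 - the same total brightness change, but spread across frequencies the kernel ignores.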

DavidSnider said:
I'm guessing 'training out' GA-generated positives wouldn't be very useful because there is infinite variation?
OK, but there was not infinite variation in the non-cat part of the training set. The genetic algorithms may identify "poison" patterns, ones that will always be rejected as non-cat. Since generating an image with such a pattern is likely to kill the algorithm, algorithms that check for the poison and fix it before finishing the image will tend to survive.
 
.Scott said:
OK, but there was not infinite variation in the non-cat part of the training set. The genetic algorithms may identify "poison" patterns, ones that will always be rejected as non-cat. Since generating an image with such a pattern is likely to kill the algorithm, algorithms that check for the poison and fix it before finishing the image will tend to survive.

I'm not sure I follow this.

We train the NN on a set of images, some of which humans have labeled as cats and some as non-cat, but all 'normal' photos. Then we get a GA to converge on a bunch of images that the NN thinks, with high probability, are cat-like. A human looks at these images and they look more or less like noise (non-cat-like by human standards, for sure). Is this what you mean by a 'poison pattern'? Then we add these images to the training set labeled as non-cat and retrain.

How would you expect this to affect the new NN's ability to label images 'cat' that humans would also label as 'cat'?

When we re-run the GA on this new NN, are we still going to converge on data that looks like noise to a human?
 
DavidSnider said:
I'm not sure I follow this.

We train the NN on a set of images, some of which humans have labeled as cats and some as non-cat, but all 'normal' photos. Then we get a GA to converge on a bunch of images that the NN thinks, with high probability, are cat-like. A human looks at these images and they look more or less like noise (non-cat-like by human standards, for sure). Is this what you mean by a 'poison pattern'?
Let's say you have 100 cat images and 100 non-cat images. And you have several convolution filters, and say filter number 3 is a mid-pass filter (perhaps a wavelength of 6 pixels). And let's say that, as it happens, one of your "max" values from this filter is very telling. Call it F3M9. We get an F3M9 that is less than 4 for 70% of the non-cat images and a value greater than 10 for every cat image. With that type of input, the NN is going to key off that F3M9 value. If it sees F3M9 < 10, it will weigh the image as very unlikely to be a cat.
So F3M9<10 would become a "poison pattern". If it is there, it mustn't be a cat. So a GA could eventually learn to check for this pattern - so it could avoid it.
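A hedged sketch of this idea, with the feature, threshold, and "repair" step all invented for illustration (a real F3M9 would come out of the NN's convolution stage, and a real GA would encode the check in its genome rather than as an explicit loop):

```python
import numpy as np

F3M9_THRESHOLD = 10.0  # the cutoff from the post's example

def f3m9(image):
    """Hypothetical stand-in for the post's F3M9 feature: the maximum
    response of a mid-pass filter with a wavelength of about 6 pixels.
    Here: the largest absolute difference between pixels 6 apart,
    scaled so the numbers roughly match the example."""
    flat = image.ravel()
    return 20.0 * float(np.abs(flat[6:] - flat[:-6]).max())

def avoid_poison(image, seed=0):
    """If a candidate image trips the poison pattern (F3M9 < 10), repair
    it by flipping pixels to extreme values until the feature clears the
    cutoff - the 'check for the poison and fix it' behavior described."""
    rng = np.random.default_rng(seed)
    img = image.copy()
    while f3m9(img) < F3M9_THRESHOLD:
        i = int(rng.integers(0, img.size))
        img.flat[i] = 1.0 if img.flat[i] < 0.5 else 0.0
    return img
```

A uniformly gray image trips the poison check (no mid-frequency contrast at all), and the repair step nudges it just past the cutoff - without making it look any more like a cat to a human.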

It is important to note that we have to add something to our GA survival formula that prevents an algorithm from finding a winning cat image and then just repeating that image unchanged, over and over again - knowing that it will always get a win.
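One common way to add such a term, sketched here with an invented penalty scheme: subtract from the classifier score a penalty for resembling images already in an archive of past winners, so exact repeats stop paying off. As before, `score_cat` stands in for the trained NN, and the similarity measure and scale are arbitrary.

```python
import numpy as np

def novelty_fitness(image, score_cat, archive, penalty=1.0):
    """Classifier score minus a penalty for resembling past winners.

    `score_cat` is a hypothetical stand-in for the trained NN's P(cat);
    `archive` is a list of previously rewarded images. An image identical
    to an archived one scores low, so pure repetition stops paying off.
    """
    score = score_cat(image)
    if archive:
        # Similarity: 1.0 at an exact repeat, falling off with mean
        # absolute pixel difference from the nearest archived image.
        nearest = min(np.abs(image - a).mean() for a in archive)
        score -= penalty * max(0.0, 1.0 - nearest / 0.25)
    return score
```

An image far from everything in the archive keeps its full classifier score; an exact repeat of an archived winner is pushed below zero, forcing the GA to keep exploring.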

Ideally, our GA should eventually evolve the ability to mimic the NN process - at least well enough to score wins every time.

DavidSnider said:
Then we add these images to the training set labeled as non-cat and retrain.
How would you expect this to affect the new NN's ability to label images 'cat' that humans would also label as 'cat'?
When we re-run the GA on this new NN, are we still going to converge on data that looks like noise to a human?

So you would have your original 100 cat images and 100 non-cat images.
Then say you let your GA "evolve" overnight and, in the morning, you collect 1000 images.
You review all 1000 and you don't like any of them. So you add them to the training set. Now you have 100 cats and 1100 non-cats in your training set.

Then say you let your GA "evolve" overnight and, in the morning, you collect 1000 images.
You review all 1000 and one kind of looks like a cat. So you add them all to the training set, labeling that one as a cat. Now you have 101 cats and 2099 non-cats in your training set.

And so on.
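The bookkeeping above can be sketched as a small loop, assuming the same starting set of 100 cats and 100 non-cats and 1000 GA images per round (the counts below reproduce the numbers in this post):

```python
def retraining_rounds(ga_batch=1000, cats_found=(0, 1)):
    """Track training-set sizes across the retraining loop described above:
    each round the GA produces `ga_batch` images; a human labels a few of
    them as cats and the rest go into the set as non-cats."""
    cats, non_cats = 100, 100          # the original training set
    history = []
    for found in cats_found:
        cats += found
        non_cats += ga_batch - found
        history.append((cats, non_cats))
    return history
```

The non-cat side of the set grows roughly 1000 images per round while the cat side barely moves, which is what raises the information-capacity concern below.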

The problem would be the information capacity of your NN. If it could effectively store all of the original 100 cat images within its learning capacity, you would tend to drive it toward that state - except that you would occasionally find new images of cats.
 
Google had a NN viewer at some stage that let you see what each layer of an NN looked like; it was quite surreal. I'll see if I can find a link.

Cheers
 
