Neural Networks -- using the NN classifier as the fitness function

  • #1
DavidSnider
Gold Member

Main Question or Discussion Point

Let's say you've trained a neural network and it can classify cat images with good (90%) accuracy.

Now let's say you use a genetic algorithm and start 'breeding' images together using the NN classifier as the fitness function.

Will you eventually start generating pictures that look like cats?
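Something like this rough sketch is what I have in mind (all of it invented for illustration; `cat_probability` just stands in for whatever confidence score the trained NN exposes):

```python
import numpy as np

def cat_probability(image):
    """Stand-in for the trained NN: returns its confidence that image is a cat."""
    raise NotImplementedError  # assume this wraps the trained classifier

def evolve(pop_size=100, generations=1000, shape=(64, 64), mutation_rate=0.01):
    # Start from random noise images.
    population = [np.random.rand(*shape) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness is simply the classifier's cat-confidence.
        fitness = np.array([cat_probability(img) for img in population])
        # Keep the top half as parents.
        parents = [population[i] for i in np.argsort(fitness)[pop_size // 2:]]
        children = []
        while len(children) < pop_size:
            i, j = np.random.choice(len(parents), 2, replace=False)
            # Crossover: take each pixel from one parent or the other.
            mask = np.random.rand(*shape) < 0.5
            child = np.where(mask, parents[i], parents[j])
            # Mutation: re-randomize a small fraction of pixels.
            flip = np.random.rand(*shape) < mutation_rate
            child = np.where(flip, np.random.rand(*shape), child)
            children.append(child)
        population = children
    return max(population, key=cat_probability)
```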
 

Answers and Replies

  • #2
.Scott
Homework Helper
Let's say you've trained a neural network and it can classify cat images with good (90%) accuracy.

Now let's say you use a genetic algorithm and start 'breeding' images together using the NN classifier as the fitness function.

Will you eventually start generating pictures that look like cats?
You will get images that look like a cat to the NN. The chance that they will look like cats to a person is about 0%.

The genetic algorithm will tend to concentrate on the most reliable cat indicators identified by the NN. That is to say: the features coded in the NN that most reliably trigger a cat identification. So if the NN tends to rely on cat-fur texture, the genetic algorithm will tend to put that texture into its images - but not necessarily in a way that resembles actual cat fur. And the location of the "fur" in the image, and its position relative to other image elements, will be consistent with what the NN expects for cat pictures in general - but not necessarily for any specific cat image.

The result will also reflect as much of what the NN has been trained to treat as not a cat as of what it has been trained to treat as a cat. So if quite a few of the non-cat images included seashore scenes, but none of the cat images did, the identifiers for seashore scenes will be missing from the generated images.
 
  • #3
DavidSnider
Gold Member
Interesting. So I guess neural networks are pretty good at spotting cats in pictures similar to the ones they have been trained on, but in the world of all possible images they are still pretty much clueless. I'm guessing 'training out' GA-generated positives wouldn't be very useful because there is infinite variation?
 
  • #4
.Scott
Homework Helper
So I guess neural networks are pretty good at spotting cats in pictures similar to the ones they have been trained on, but in the world of all possible images they are still pretty much clueless.
They attempt to identify image features that distinguish between cat and non-cat, based on the training set. Those image features can be anything - but normally a convolutional NN is used. A convolution will tend to highlight selected spatial frequencies in the image. So one filter might tend to catch sharp edges while another identifies smoother transitions. The NN then identifies patterns in this spatial-frequency data that have often been cats and seldom anything else in the training set.
So we have to be careful when we use the word "similar". What the NN considers similar may not be what the human brain recognizes as a similarity.
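As a toy illustration of the spatial-frequency point (both kernels are invented for the example, not taken from any real network):

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(64, 64)  # stand-in for a grayscale photo

# A Laplacian-like kernel responds to sharp edges (high spatial frequency).
edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]])

# A broad averaging kernel responds to smooth transitions (low spatial frequency).
smooth_kernel = np.ones((7, 7)) / 49.0

edges = convolve2d(image, edge_kernel, mode='same')
smooth = convolve2d(image, smooth_kernel, mode='same')

# A convolutional NN learns many such kernels, then looks for patterns in these
# filtered maps that were common in the cat images and rare elsewhere.
```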

I'm guessing 'training out' GA-generated positives wouldn't be very useful because there is infinite variation?
OK, but there was not infinite variation in the non-cat part of the training set. The genetic algorithms may identify "poison" patterns, ones that will always be rejected as non-cat. Since generating an image with such a pattern is likely to kill the algorithm, algorithms that check for the poison and fix it before finishing the image will tend to survive.
 
  • #5
DavidSnider
Gold Member
OK, but there was not infinite variation in the non-cat part of the training set. The genetic algorithms may identify "poison" patterns, ones that will always be rejected as non-cat. Since generating an image with such a pattern is likely to kill the algorithm, algorithms that check for the poison and fix it before finishing the image will tend to survive.
I'm not sure I follow this.

We train the NN on a set of images, some of which humans have labeled as cat and some as non-cat, but all 'normal' photos. Then we get a GA to converge on a bunch of images that the NN thinks, with high probability, are cat-like. A human looks at these images and they look more or less like noise (non-cat-like by human standards, for sure). Is this what you mean by a 'poison pattern'? Then we add these images to the training set labeled as non-cat and retrain.

How would you expect this to affect the new NN's ability to label as 'cat' the images that humans would also label as 'cat'?

When we re-run the GA on this new NN, are we still going to converge on data that looks like noise to a human?
 
  • #6
.Scott
Homework Helper
I'm not sure I follow this.

We train the NN on a set of images, some of which humans have labeled as cat and some as non-cat, but all 'normal' photos. Then we get a GA to converge on a bunch of images that the NN thinks, with high probability, are cat-like. A human looks at these images and they look more or less like noise (non-cat-like by human standards, for sure). Is this what you mean by a 'poison pattern'?
Let's say you have 100 cat images and 100 non-cat images. And you have several convolution filters, and say filter number 3 is a mid-pass filter (perhaps a wavelength of 6 pixels). And let's say that, as it happens, one of your "max" values from this filter is very telling. Call this F3M9. We get an F3M9 that is less than 4 for 70% of the non-cat images and a value greater than 10 for every cat image. With that type of input, the NN is going to key off that F3M9 value. If it sees F3M9 < 10, it will weigh the image as very unlikely to be a cat.
So F3M9 < 10 would become a "poison pattern": if it is there, it mustn't be a cat. So a GA could eventually learn to check for this pattern - and avoid it.
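A sketch of that check-and-fix step, using the made-up numbers from above (the band-pass stand-in for filter #3 and the repair step are both invented):

```python
import numpy as np
from scipy.signal import convolve2d

CAT_THRESHOLD = 10.0  # from the made-up example: F3M9 >= 10 for every cat image

def f3m9(image):
    # Stand-in for "max output of convolution filter #3": a band-pass response
    # near a 6-pixel wavelength, matching the mid-pass filter described above.
    x = np.arange(13) - 6
    kernel = np.outer(np.ones(13), np.cos(2 * np.pi * x / 6.0)) / 13.0
    return convolve2d(image, kernel, mode='valid').max()

def fix_poison(image):
    # F3M9 < 10 is the "poison": the NN will never call such an image a cat.
    # Injecting texture at the filter's preferred wavelength raises F3M9
    # until the image clears the threshold.
    texture = np.sin(2 * np.pi * np.arange(image.shape[1]) / 6.0)
    while f3m9(image) < CAT_THRESHOLD:
        image = image + 0.5 * texture  # broadcasts across all rows
    return image
```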

It is important to note that we have to add something to our GA survival formula that prevents an algorithm from finding a winning cat image and then just repeating that image unchanged, over and over again - knowing that it will always get a win.
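For example, something like this (a sketch; the distance-based penalty is just one of many possible choices):

```python
import numpy as np

def survival_score(image, past_winners, cat_probability, penalty=5.0):
    # Base score: the NN's confidence that the image is a cat.
    score = cat_probability(image)
    # Discount images nearly identical to earlier winners, so the population
    # can't just replay one winning image unchanged forever.
    for old in past_winners:
        distance = np.mean((image - old) ** 2)  # mean squared pixel difference
        score -= penalty * np.exp(-distance / 0.01)
    return score
```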

Ideally, our GA should eventually evolve the ability to mimic the NN process - at least well enough to score wins every time.

Then we add these images to the training set labeled as non-cat and retrain.
How would you expect this to affect the new NN's ability to label as 'cat' the images that humans would also label as 'cat'?
When we re-run the GA on this new NN, are we still going to converge on data that looks like noise to a human?
So you would have your original 100 cat images and 100 non-cat images.
Then say you let your GA "evolve" overnight, and in the morning you collect 1000 images.
You review all 1000 and you don't like any of them. So you add them to the training set. Now you have 100 cats and 1100 non-cats in your training set.

Then say you let your GA "evolve" overnight again, and in the morning you collect another 1000 images.
You review all 1000 and one kind of looks like a cat. So you add them to the training set, that one labeled cat and the rest non-cat. Now you have 101 cats and 2099 non-cats in your training set.

And so on.

The problem would be the information capacity of your NN. If it could effectively store all of the original 100 cat images in its weights (in terms of its learning capacity), you would tend to drive it towards that state - except that you would occasionally find new images of cats.
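In rough pseudo-Python, the loop looks like this (every function here is a hypothetical stand-in):

```python
def train_classifier(cats, non_cats):
    """Hypothetical: train the NN from scratch on the labeled images."""
    raise NotImplementedError

def run_ga_overnight(nn, n_images):
    """Hypothetical: evolve images against nn, return the survivors."""
    raise NotImplementedError

def human_says_cat(image):
    """Hypothetical: a person reviews the image in the morning."""
    raise NotImplementedError

def hard_negative_loop(cats, non_cats, rounds=10):
    # cats / non_cats: lists of labeled training images (100 each to start).
    for _ in range(rounds):
        nn = train_classifier(cats, non_cats)             # retrain on current set
        candidates = run_ga_overnight(nn, n_images=1000)  # the GA's fooling images
        for img in candidates:
            if human_says_cat(img):   # rare case: one genuinely looks like a cat
                cats.append(img)
            else:                     # usual case: noise-like false positives
                non_cats.append(img)
    return train_classifier(cats, non_cats)
```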
 
  • #7
Google had an NN viewer at some stage that let you see what each layer of an NN looked like; it was quite surreal. I'll see if I can find a link.

Cheers
 
