CraigH
This dilemma seems to occur all the time in many different engineering problems. It seems impossible to optimise something with multiple parameters, for three reasons:
- There are too many possible combinations of these parameters to be able to simulate them all
- You optimise one parameter at a time, but then you don't know whether your final result is the best possible one. For example: you start with a neural network with 5 layers and 4 neurons in each layer. You sweep the number of layers, plotting the accuracy of the network against the number of layers, and find that the optimum is 3 layers. You then optimise the number of neurons in the first layer and find that the optimum is 10, then you optimise the second layer and find that 7 is best, and then the third layer and find that 6 is best. However, this might not be the best overall solution: the accuracy might be higher if you start with 3 neurons in the first layer, which is not the per-layer optimum, and then optimise the second and third layers, ending up with different neuron counts but a much better overall accuracy than the first method. (A small sketch of this greedy, one-parameter-at-a-time sweep is given after this list.)
- You optimise parameter 1, which governs property X, and then you optimise parameter 2, which governs property Y; but this changes property X, so you go back and re-optimise parameter 1, which in turn changes property Y again.
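To make the second point concrete, here is a minimal Python sketch of the greedy, one-parameter-at-a-time sweep compared against an exhaustive joint sweep. The `accuracy()` function is a made-up stand-in (a real version would train and evaluate a network), and it is deliberately coupled between the two parameters so that the greedy result lands away from the joint optimum.

```python
import itertools

# Hypothetical stand-in for training a network and measuring validation
# accuracy; a real version would build and train the model. The two
# parameters are deliberately coupled so that the greedy optimum and the
# joint optimum differ.
def accuracy(layers, neurons_per_layer):
    return -((layers - 3) ** 2) - ((neurons_per_layer - 2 * layers) ** 2)

layer_options = range(1, 6)        # 1..5 layers
neuron_options = range(1, 13)      # 1..12 neurons per layer

# Greedy, one-parameter-at-a-time sweep (the procedure described above):
# sweep the layer count with the neuron count fixed at its starting value,
# then sweep the neuron count with the "best" layer count.
best_layers = max(layer_options, key=lambda l: accuracy(l, 4))
best_neurons = max(neuron_options, key=lambda n: accuracy(best_layers, n))
print("greedy:", best_layers, best_neurons, accuracy(best_layers, best_neurons))

# Joint sweep over every combination -- infeasible when there are many
# parameters, but it shows the greedy answer need not be the overall best.
joint = max(itertools.product(layer_options, neuron_options),
            key=lambda p: accuracy(*p))
print("joint: ", joint, accuracy(*joint))
```

On this toy objective the greedy sweep settles on (2 layers, 4 neurons) with a lower score than the joint optimum of (3 layers, 6 neurons), purely because the first sweep was done with the neuron count fixed at an unrepresentative value.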
Additional Details
The neural network example is the problem I am currently having, but I'll give another example of this problem that I have had in the past. I was trying to design a patch antenna with a resonant frequency of 2 GHz. The resonant frequency is mainly dependent on the width of the patch, and the gain of the antenna is mainly dependent on the insertion depth. In CST, I performed a sweep on the width and picked the value that gave the lowest S-parameter at 2 GHz. I then performed a sweep on the insertion depth and picked the value that reduced the S-parameter to the lowest value (I decided that -40 dB was acceptable). But this then changed the frequency at which this happens, so I did a sweep on the width again and picked the value that gave the lowest S-parameter, but now the S-parameter is too high again, so I optimise the insertion depth... and so on.
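Here is that back-and-forth written as a loop, with a purely hypothetical `s11_at_2ghz()` standing in for the CST simulation (the real dependence on width and insertion depth would come from the field solver). Because the two parameters are coupled in the toy model, each sweep shifts the other parameter's optimum, and the alternating procedure can settle somewhere worse than a joint sweep over both parameters.

```python
import numpy as np

# Toy stand-in for a CST run: S11 (dB) at 2 GHz for a given patch width
# and insertion depth. The model is entirely made up, but the two
# parameters are coupled, which is what makes the alternating sweeps
# interact with each other.
def s11_at_2ghz(width_mm, depth_mm):
    detune = (width_mm - 45.0) + 0.8 * (depth_mm - 8.0)   # resonance shift
    mismatch = (depth_mm - 8.0) ** 2                      # impedance mismatch
    return -40.0 + 5.0 * abs(detune) + 0.5 * mismatch

widths = np.arange(30.0, 55.0, 0.1)   # mm
depths = np.arange(2.0, 14.0, 0.1)    # mm

width, depth = 40.0, 5.0              # starting point
for it in range(4):
    # Sweep width with depth fixed, keep the value with the lowest S11 at 2 GHz,
    # then sweep depth with the new width fixed.
    width = min(widths, key=lambda w: s11_at_2ghz(w, depth))
    depth = min(depths, key=lambda d: s11_at_2ghz(width, d))
    print(f"pass {it}: width={width:.1f} mm, depth={depth:.1f} mm, "
          f"S11={s11_at_2ghz(width, depth):.1f} dB")

# Brute-force joint sweep over the same grids, for comparison.
best = min(((w, d) for w in widths for d in depths),
           key=lambda p: s11_at_2ghz(*p))
print(f"joint: width={best[0]:.1f} mm, depth={best[1]:.1f} mm, "
      f"S11={s11_at_2ghz(*best):.1f} dB")
```

In this toy version the alternating sweeps stop improving after the first pass, yet the point they settle on is several dB worse than the joint optimum, which mirrors the ping-pong described above.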