SotirisD
So I am experimenting with different configurations of multilayer perceptrons in MATLAB, and my training data are extracted from the images I want to classify.
- I am currently using adaptive learning rate with momentum backpropagation (traingdx), trying different initial learning rates. What I get is that for low values I have pretty good results, but when the initial rate gets bigger the accuracy of my model drops dramatically. How can this be explained?
- Another question I have is how different output activation functions can affect your model. Are there heuristics for this, or is it just trial and error? For example, I get good results with {'tansig', 'tansig', 'purelin'} and {'tansig', 'tansig', 'tansig'}, but {'tansig', 'tansig', 'logsig'} fails. I suspect it has to do with negative values being squashed toward zero by logsig, whose output range is (0, 1).
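To illustrate the first question, here is a minimal sketch of why a large learning rate hurts plain gradient descent (written in Python for brevity; the effect is framework-agnostic). It uses a toy one-dimensional quadratic loss, so the constants and the convergence threshold lr < 2/a are specific to this toy problem, not to any particular network:

```python
# Toy illustration: gradient descent on f(w) = 0.5 * a * w**2.
# The gradient is a*w, so each step multiplies w by (1 - lr*a).
# If |1 - lr*a| < 1 the iterates shrink toward the minimum;
# if |1 - lr*a| > 1 they oscillate with growing magnitude (divergence).

def descend(lr, a=10.0, w0=1.0, steps=50):
    """Run plain gradient descent on the quadratic and return the final w."""
    w = w0
    for _ in range(steps):
        w -= lr * a * w
    return w

print(abs(descend(lr=0.05)))  # factor |1 - 0.5| = 0.5 -> converges toward 0
print(abs(descend(lr=0.25)))  # factor |1 - 2.5| = 1.5 -> blows up
```

The same overshoot happens in a multilayer perceptron, just with a curvature that varies across weight space. traingdx does adapt the rate during training, but an initial rate far above the stable range can push the weights into a bad region before the adaptation brings the rate back down, which would match the accuracy drop described above.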
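On the second question, a quick check of the output ranges supports the suspicion about logsig. The sketch below uses the standard definitions (MATLAB's tansig is tanh and logsig is the logistic sigmoid); the key point is that logsig can never produce a negative value, so if the targets are coded with negatives (e.g. -1/+1, the natural coding for a tansig output), a logsig output layer cannot reach them:

```python
import math

def tansig(x):
    """MATLAB's tansig is the hyperbolic tangent: output range (-1, 1)."""
    return math.tanh(x)

def logsig(x):
    """MATLAB's logsig is the logistic sigmoid: output range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# For a strongly negative input, tansig goes close to -1,
# while logsig only approaches 0 from above and stays positive.
print(tansig(-3.0))  # close to -1
print(logsig(-3.0))  # small but strictly positive
```

This suggests a heuristic rather than pure trial and error: match the output activation's range to the target coding (purelin for unbounded targets, tansig for targets in [-1, 1], logsig for targets in [0, 1]).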