I am writing to understand the behavior of the function NeuralNetworkTrain.
Every time I run this function with the same keyword values (for example: NeuralNetworkTrain input=in, output=out, nhidden=3, Momentum=0.075, Iterations=10000, learningrate=0.15), I get different Weights, and therefore a different W_NNResult.
I would be grateful if someone could tell me whether it is normal to obtain different Weights.
There are various aspects of neural networks that come into play here:
Training the network starts from a random set of initial weights. This means that, in general, you will get different final weights unless the computed weights happen to converge to a single solution -- something that is unlikely in many applications.
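This effect can be reproduced with a minimal sketch in Python (a toy 2-3-1 sigmoid network trained on XOR, using the learning rate and iteration count from the question; this is an illustration of random initialization, not the actual NeuralNetworkTrain implementation):

```python
import numpy as np

def train_once(seed, iterations=10000, lr=0.15):
    """Train a tiny 2-3-1 sigmoid network, starting from random
    initial weights drawn with the given seed."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # XOR inputs
    y = np.array([[0], [1], [1], [0]], float)              # XOR targets
    W1 = rng.normal(scale=0.5, size=(2, 3))  # random initial weights
    W2 = rng.normal(scale=0.5, size=(3, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(iterations):
        h = sig(X @ W1)                      # hidden layer
        out = sig(h @ W2)                    # output layer
        d_out = (out - y) * out * (1 - out)  # backprop deltas
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out               # gradient-descent updates
        W1 -= lr * X.T @ d_h
    return W1, W2

# Two runs with different random initializations end at
# different final weight sets, even with identical settings:
W1a, W2a = train_once(seed=1)
W1b, W2b = train_once(seed=2)
print(np.allclose(W1a, W1b), np.allclose(W2a, W2b))
```

Each run follows a different gradient-descent trajectory from its own starting point, so the final weight matrices differ even though every training parameter is the same.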
Keep in mind that just because the weights are different, it does not mean that applying them to a given input will produce a different outcome. In this case I would not expect exact numerical equality in W_NNResult between consecutive runs. Instead, I would look for a combination of values that expresses the same pattern, which you can extract by a process such as thresholding the output.
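The pattern-versus-values point can be illustrated like this (the output numbers are made up for illustration, not actual NeuralNetworkRun results):

```python
import numpy as np

# Hypothetical network outputs from two training runs: the raw
# values differ run to run, but the thresholded pattern agrees.
run1 = np.array([0.93, 0.07, 0.88, 0.12])
run2 = np.array([0.97, 0.02, 0.91, 0.05])

same_numbers = np.allclose(run1, run2)                  # raw values differ
same_pattern = np.array_equal(run1 > 0.5, run2 > 0.5)   # classification agrees
print(same_numbers, same_pattern)  # False True
```

So the comparison that matters is between the thresholded patterns, not between the raw output values.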
The last point you need to check is whether the iterations you ran led to a "converged" solution. You may want to re-run the training using the first set of weights as initial inputs but with a different momentum and learning rate.
I hope this helps,
Thank you for the answer. The Igor neural network converges very well and predicts unknown data very well.
I am used to applying Igor's ANN to predict my data, but I need to change the activation function. Is it possible to change the activation function for the output neurons, for example to use a threshold function (0 or 1) or a tanh-sigmoid function (-1 to 1) instead of the sigmoid function (0 to 1)?
The current pair of NN operations has no mechanism for changing the choice or range of the activation function.
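Although the built-in operations cannot change the activation itself, the two output ranges asked about can be approximated by post-processing the sigmoid output after training. A sketch (the output values are hypothetical, and `to_tanh_range` / `to_binary` are illustrative helpers, not Igor operations):

```python
import numpy as np

def to_tanh_range(out):
    """Map sigmoid output from (0, 1) onto (-1, 1).
    Note the identity 2*sig(z) - 1 == tanh(z/2), so this
    reproduces a tanh-shaped curve from the sigmoid output."""
    return 2.0 * out - 1.0

def to_binary(out, cut=0.5):
    """Hard-threshold sigmoid output to exactly 0 or 1."""
    return (out >= cut).astype(float)

out = np.array([0.05, 0.4, 0.5, 0.9])  # hypothetical sigmoid outputs
print(to_tanh_range(out))  # approx. -0.9, -0.2, 0.0, 0.8
print(to_binary(out))      # 0, 0, 1, 1
```

This is only a workaround applied to the network's output, not a true change of the output neurons' activation during training.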
Thanks for the answer