First of all, thank you for developing SimBrain, it's really impressive.
I've been playing with backprop networks for some time. They work well for digit recognition, but when I try to train a network on a simple sine function over the range [0, 6], I get very poor results. The network configuration is 1-5-1. According to the literature, this should be an easy task for this kind of network.
What am I doing wrong?
I've attached the network configuration.
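For reference, a training set of the kind described here can be generated like this (a minimal NumPy sketch; the sample count and spacing are assumptions, since the post does not specify them):

```python
import numpy as np

# Assumed setup: 121 evenly spaced samples of sin(x) on [0, 6].
# The sample count is illustrative only.
x = np.linspace(0.0, 6.0, 121).reshape(-1, 1)  # one input feature per row
y = np.sin(x).ravel()                          # one target value per sample
```

Each row of `x` is one training input and the matching entry of `y` is the target output, which is the shape of data a 1-5-1 network expects.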
Hi, sorry for the delayed response.
I also got an error of 0.0327, which I think is high. The function the network implements is shown in the attached graph. As you can see, it is monotonic. It seems the network can't produce a non-monotonic function, which is why it can't learn the sine wave properly. I would expect such behaviour with one neuron in the hidden layer (the network output resembles a logistic function), but with 5 neurons I thought the network should be able to implement a more sophisticated function.
This is interesting. I need to read up on this a bit and try a few things out myself. Forgive me if it takes a while to get to this; I'm pretty busy at the moment. Perhaps someone else reading this will have other insights in the meantime.
One thing you could try, just to see whether it's Simbrain or something intrinsic to the problem, is to run the same test in another system and compare the results. MATLAB, if you have it, or perhaps scikit-learn.
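As an independent check that needs neither MATLAB nor scikit-learn, the same 1-5-1 setup can be sketched directly in NumPy. The sample count, input normalization, learning rate, and iteration count below are all assumptions for illustration, not Simbrain's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sin(x) sampled on [0, 6] (sample count is an assumption)
x = np.linspace(0.0, 6.0, 121).reshape(-1, 1)
xn = (x - 3.0) / 3.0                  # normalize inputs to [-1, 1] for stability
y = np.sin(x)

# 1-5-1 network: tanh hidden layer, linear output
W1 = rng.normal(scale=0.5, size=(1, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(20000):
    h = np.tanh(xn @ W1 + b1)         # forward pass, hidden activations
    err = h @ W2 + b2 - y             # output error
    g_out = 2 * err / len(xn)         # gradient of mean squared error
    g_h = g_out @ W2.T * (1 - h * h)  # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (xn.T @ g_h);  b1 -= lr * g_h.sum(axis=0)

mse = float(np.mean((np.tanh(xn @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

If this plain full-batch backprop fits the sine wave while Simbrain does not on the same data, that points at the implementation rather than the 1-5-1 architecture.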
Well, it seems the problem is in Simbrain. I just fed the same data to JustNN and it produced the expected results. I've attached a graph with the target data and the output of two networks, with 2 and 5 neurons in the hidden layer.
As I expected, the network with 5 neurons approximates the training data almost exactly, while the network with 2 neurons cannot. Still, even with 2 neurons the shape of the function resembles the original, which I couldn't achieve with Simbrain.
Thanks Roy, this is very helpful to know about. We've been planning an overhaul of this part of the software for a while now, and are planning to work on it this summer. This will be a helpful test case.
Any update on this topic? I'm facing the same issue Roy reported. I tried playing with the learning rate and momentum and increasing the number of neurons, but got the same results. It seems a network with 1 input and 1 output hardly converges when trying to approximate any function.
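The kind of sweep described here, varying the hidden-layer size, learning rate, and momentum, can be sketched in NumPy for comparison; every hyperparameter value below is an illustrative assumption, not a recommended Simbrain setting:

```python
import numpy as np

def train(n_hidden, lr, momentum, steps=15000, seed=0):
    """Full-batch gradient descent with classical momentum on a
    1-n_hidden-1 tanh network fitting sin(x) on [0, 6].
    Returns the final mean squared error on the training set."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 6.0, 121).reshape(-1, 1)
    xn = (x - 3.0) / 3.0              # normalize inputs to [-1, 1]
    y = np.sin(x)
    params = [rng.normal(scale=0.5, size=(1, n_hidden)), np.zeros(n_hidden),
              rng.normal(scale=0.5, size=(n_hidden, 1)), np.zeros(1)]
    vel = [np.zeros_like(p) for p in params]  # momentum buffers
    for _ in range(steps):
        W1, b1, W2, b2 = params
        h = np.tanh(xn @ W1 + b1)
        err = h @ W2 + b2 - y
        g_out = 2 * err / len(xn)
        g_h = g_out @ W2.T * (1 - h * h)
        grads = [xn.T @ g_h, g_h.sum(0), h.T @ g_out, g_out.sum(0)]
        for p, v, g in zip(params, vel, grads):
            v *= momentum             # decay old velocity
            v += lr * g               # accumulate current gradient
            p -= v                    # update parameter in place
    W1, b1, W2, b2 = params
    return float(np.mean((np.tanh(xn @ W1 + b1) @ W2 + b2 - y) ** 2))

for n_hidden in (2, 5):
    for lr, mom in ((0.01, 0.0), (0.005, 0.9), (0.05, 0.0)):
        print(f"hidden={n_hidden} lr={lr} momentum={mom} "
              f"mse={train(n_hidden, lr, mom):.4f}")
```

If the 5-neuron runs fit the sine wave here while the same settings fail in Simbrain, that would support the suspicion that the issue is in the implementation rather than the hyperparameters.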
Thank you very much for your work on this tool.
Oops, I missed this in the digest. We are actually planning to work on backprop this winter break (we did work on it last summer but never finished), so hopefully it will land in the 3.03 or 3.04 release by the end of break.