Hi, I just recently found SimBrain and I love it. Thanks for all of your work. I have gone over all of the docs and videos about SimBrain that I can find, and I'm left a little confused about my current task. Could you please make a video ASAP on how to train a backprop network? I'll be sure to watch it five times!
I'm new to neural nets, and one part of backprop training that's confusing me is the results I'm getting from a default backprop setup. I'm using a 1x5x3 network: one output, 5 hidden units, and 3 inputs. The inputs are three currency pairs reduced to single values. For example, the first input is a list of daily EUR/USD close values: 1.1223, 1.1323, 1.1418, ..., so the inputs are all linear. The hidden layer and output layer default to sigmoidal (discrete), which I keep, but I don't understand the results. The output is almost always close to 1. What does this mean? Is this how close the backprop machine is to the target? If so, how do I convert that back to a linear number? In my data set the output is just the next row of the input, so the input and output data are the same except the output is missing the first day of data. If you can lend a thought that would be great, and if you have time for a video on backprop, that would be great too!
Hi William, thanks for the note. I'll try to get back to you soon on your question (busy today), and time permitting, will upload a short backprop video, hopefully by early next week (I have a few other videos I have to make first, so it may take me a bit longer).
As to your question, could you paste in a bit of your training data and explain in more detail how it works? One thing that is not clear to me is why there are 3 inputs and one output.
Another thing to note is that Simbrain is expecting inputs and targets between 0 and 1. It's usually a good idea to rescale or normalize your data so that it is in this range, or in the range -1 to 1. You can also double click on any layer to change the expected upper and lower bounds for those nodes.
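To make that rescaling concrete, here's a minimal Python sketch done outside Simbrain (min-max normalization is one common choice for getting a series into the 0-to-1 range; the sample closes are just the values from the question):

```python
def rescale(values, new_min=0.0, new_max=1.0):
    """Min-max rescale a list of numbers into [new_min, new_max]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant series: map everything to the midpoint
        return [(new_min + new_max) / 2.0 for _ in values]
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (v - lo) * scale for v in values]

closes = [1.1223, 1.1323, 1.1418]  # EUR/USD daily closes from above
print(rescale(closes))             # smallest maps to 0.0, largest to 1.0
```

Passing `new_min=-1.0` instead gives the -1 to 1 range mentioned above.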
For predicting the next item in a time series, special networks are often used. In Simbrain we have "simple recurrent networks" which can be used for this. The Simbrain components should work fine for small tasks, but as noted in the video, it's all pretty bare bones. For something that will work on more complex tasks using large datasets, you can check out theano (http://deeplearning.net/software/theano/), for example.
The video is fantastic and clears up some of my confusion; I look forward to more videos. And thanks for the pointer to "simple recurrent networks" -- that's more in the direction of what I'm looking for, and it explains why my results made no sense.
Now I'm using a "simple recurrent network" with this data set:
These represent whether a currency pair closed above or below the previous day's close: 1 means the currency pair closed higher than the previous close, and -1 means it closed lower. To get the target data, just remove the first row of the data set above.
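Since the data set itself isn't pasted here, a made-up Python example of that input/target split might look like this (the rows are invented; the point is just that the targets are the same series with the first row dropped, so each input row is paired with the next day's row):

```python
# Hypothetical up/down series for three currency pairs, one row per day,
# one column per pair: 1 = closed higher than the previous day, -1 = lower.
rows = [
    [ 1, -1,  1],
    [-1, -1,  1],
    [ 1,  1, -1],
    [ 1, -1, -1],
]

# Inputs are every row but the last; targets are the same series with the
# first row removed, so target[i] is the day after input[i].
inputs  = rows[:-1]
targets = rows[1:]

for x, t in zip(inputs, targets):
    print(x, "->", t)
```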
A couple questions:
1) After I train my simple recurrent network to the above data, how do I use it to make a real prediction?
*UPDATE: Got it! Uncheck the iteration mode function and you're in real time!
2) I understand what the learning rate means (I think): it's the step size the cost function takes to get to a minimum. I enrolled in a machine learning course at Coursera :) But what does momentum do? Something akin to how hard the ball falls into a valley?
Thanks again for your time, Jeff. You have helped me a great deal in my learning of machine learning!
Hi again. Glad it's working out! Andrew Ng's Coursera course is excellent for backprop. I think he mentions momentum at some point, and he is really an excellent source on these topics. Momentum is basically a way to make the "steps" taken in gradient descent adaptive: a fraction of the previous weight update is added to the current one, so on long, flat stretches of the error surface the steps build up and get larger, while in regions where the gradient changes direction rapidly the updates partly cancel and the effective steps are smaller.
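One common formulation of momentum (a sketch of the standard textbook update rule, not necessarily exactly what Simbrain implements) looks like this on a toy one-dimensional problem:

```python
# Gradient descent with momentum on f(w) = w^2, whose gradient is 2w.
# The velocity term accumulates past updates: steps grow while the
# gradient keeps pointing the same way, and shrink when it flips sign.
def descend(w, learning_rate=0.1, momentum=0.9, steps=200):
    velocity = 0.0
    for _ in range(steps):
        grad = 2.0 * w                               # gradient of w^2
        velocity = momentum * velocity - learning_rate * grad
        w += velocity                                # apply the update
    return w

print(descend(5.0))  # approaches the minimum at w = 0
```

Setting `momentum=0.0` reduces this to plain gradient descent, which is a quick way to compare the two behaviors.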
There is not, to my knowledge, any simple systematic way to determine what momentum to use. It's really a matter of trial and error which momentum / learning rate combo works best. Perhaps others know of a good source on this topic?
As I've said a few times, this is a minimal implementation of backprop (and that applies to the SRN too). There are all kinds of techniques to speed backprop up and make it more robust that we have yet to apply to Simbrain.
If you want to go even further there is something called backprop through time, and here is a tool / post that achieved amazing results with it (and a few other things):
He used various command line tools to run this kind of recurrent neural network and it is pretty impressive what he was able to get. We hope to implement versions of some of this in Simbrain in coming, well, years...