I trained a backprop network using an input file with 16 rows, and an output file of 16 rows.
Inputs and outputs were all binary values.
The network learned the input-output combinations perfectly; the training error was zero.
When I cycled through the input file again after training completed, every input produced the correct output.
Then I made a copy of the input file, saved it under a new name, and kept only the first 5 rows (deleting the rest). When I uploaded this file in the Validate Input Data tab, the outputs were no longer all correct.
How is this possible? Am I overlooking something, or is this a bug?
The validate input tab is fairly counterintuitive, and given how many people want to use Simbrain for backprop, and the headaches this is causing, I'm definitely going to be changing it... at some point.
The easiest way to do this would be to train your network again, and then _not_ use the validate input tab. Go to the actual input layer of the network (Layer 1) and double-click on the yellow interaction box. Click on the set inputs tab, load your truncated data there, and then test using the step button there. If training error was 0, you should continue to get good results. (The problem with the validate inputs tab is that it just copies what's in the input data tab; this is fine for a quick check that backprop worked properly, but not for any kind of real validation where you test how your results generalize to new data.)
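To see why a truncated file should give identical results when fed through the trained network directly, note that a feedforward pass is deterministic per row: the first 5 rows of the file must produce exactly the same outputs they produced in the full 16-row run. A minimal NumPy sketch (not Simbrain's API; the weights here are arbitrary stand-ins for a trained network, and the 4-24-1 sizing is an assumption based on the 4 binary inputs described below):

```python
import numpy as np

rng = np.random.default_rng(0)

# The 16 binary input rows (4 bits each) from the task in this thread.
X = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)], dtype=float)

# Arbitrary fixed weights standing in for a trained 4-24-1 network
# (hypothetical sizes; the point below holds for any fixed weights).
W1 = rng.normal(size=(4, 24))
W2 = rng.normal(size=(24, 1))

def forward(x):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(sigmoid(x @ W1) @ W2)

full = forward(X)        # outputs for the full 16-row file
subset = forward(X[:5])  # outputs for the truncated 5-row file

# Row-wise identical: truncating the file cannot change per-row outputs.
assert np.allclose(subset, full[:5])
```

So if the truncated file gives different answers, the discrepancy has to come from the tool's handling of the data, not from the network itself.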
If that fails, post your data or email me and we can try to figure it out.
My Spring break is coming up, and at the top of my list is to make a video on how to do this type of thing. Others have been asking for it here and it's been a few months. I feel confident I can post a video in the next 2 weeks.
As for improving the interface for backprop, that could take quite a bit longer, as we are doing a bunch of other things in Simbrain now, including a pretty cool 3d world. Perhaps this summer. Or earlier if anyone's interested in working on it with me.
Sorry, I tried it, but it did not work.
The first row of my truncated data set gave the same wrong answer.
During training with this row, the network had associated the right answer with it.
If you have time to replicate the problem, here are the details.
3-layer backpropagation network, 4-24-1 neurons (4 input, 24 hidden, 1 output).
Input consists of 4 binary values.
If exactly one of the inputs is 1, then the output is 1, else the output is 0.
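For anyone trying to replicate this, the 16-row dataset described above can be generated in a few lines (a sketch; variable names are my own):

```python
# 4 binary inputs -> 16 rows; target is 1 iff exactly one input bit is set.
inputs = [[(i >> b) & 1 for b in range(4)] for i in range(16)]
targets = [1 if sum(row) == 1 else 0 for row in inputs]

assert len(inputs) == 16
assert sum(targets) == 4  # the four one-hot rows are the only positives
```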
I initially got an error of 0.0625 too, but if you re-randomize the weights several times, eventually it will find a state with error zero. Once the error is zero, you have to stop the training, because otherwise the error sometimes shoots up again.
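The re-randomize-and-retrain strategy can be sketched outside Simbrain as a restart loop in plain NumPy (my own implementation, using the 4-24-1 reading of the task; learning rate and epoch count are guesses, and it keeps the best error over a few random restarts rather than stopping at exactly zero):

```python
import numpy as np

def make_data():
    # 4 binary inputs; target 1 iff exactly one bit is set.
    X = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)], dtype=float)
    y = (X.sum(axis=1) == 1).astype(float).reshape(-1, 1)
    return X, y

def train_once(X, y, rng, epochs=4000, lr=0.5):
    # Fresh random 4-24-1 weights each call: this is the "re-randomize" step.
    W1 = rng.normal(0, 0.5, size=(4, 24)); b1 = np.zeros(24)
    W2 = rng.normal(0, 0.5, size=(24, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(42)
X, y = make_data()
best = min(train_once(X, y, rng) for _ in range(5))
print(f"best MSE over 5 restarts: {best:.4f}")
```

Different restarts land in different minima, which matches the observation that some runs get stuck around 0.0625 while others reach (near) zero error.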
The workspace you sent seemed to have recurrent connections in the hidden layer. In the few times I've tried I could not get 0 error with this training set, and that first row seems to be the culprit. But I've been rushing through various tasks so maybe I just didn't try hard enough. If you can get a network with no recurrent connections to do this, please send me the workspace. Cheers,