Time series forecasting with Deductor Academic 5.2

 
It shows everything: reversals and strength.
 
It all depends on the timeframes and settings: 70% to 95%.
 
AAAksakal:
It all depends on the timeframes and settings: 70% to 95%.
Somewhere around that, but on strong news, alas.
 
It doesn't care about the news either; I get the news from Ded's forecast.
 
AAAksakal:
Yes, it shows everything: reversals and strength.

Proof would be very welcome. It is big news that it is possible to forecast on non-stationary sections of a time series. You are the only one claiming this; I am not aware of anyone else.
 
Proving anything is a thankless task; in fact, it is very difficult to produce good forecasts. Many factors affect forecast accuracy. For example: a forecast is better made at the beginning of the day, not in the middle of a trading session. You can make a forecast once a session has fully played out, but then the history (immersion) window should be shifted back by 24 hours plus one trading session... The best forecasts come out on 5-minute data with the neural net which, much as I hate to admit it (I can't stand it, since 95% of what it produces is rubbish), still has to be tuned for each pair separately. That also takes a lot of time, and there are subtleties... There are actually a lot of subtleties.
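For what it's worth, the window-shift advice could look roughly like this as a pandas sketch; the file name, column name, 8-hour session length and 2000-bar immersion depth are my own assumptions, not anything Deductor exposes:

```python
import pandas as pd

# Hypothetical 5-minute close prices indexed by timestamp.
quotes = pd.read_csv("eurusd_m5.csv", index_col="time", parse_dates=True)["close"]

# Assumed session length; the advice is to step back 24 hours plus one session.
session = pd.Timedelta(hours=8)
shift_back = pd.Timedelta(hours=24) + session

# Build the forecast at the start of the day, and end the history window
# 24 h + one session earlier, so only fully played-out sessions enter the fit.
forecast_time = quotes.index[-1].normalize()   # midnight of the latest day
window_end = forecast_time - shift_back
history = quotes.loc[:window_end].tail(2000)   # immersion depth: last 2000 bars

print("history window:", history.index[0], "->", history.index[-1])
```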
 
AAAksakal:
Proving anything is a thankless task.
It's amazing that there are people in the market who are proud of that. I mean, you could show a tester run with a graph. Or is everything you've written just feverish nonsense?
 
Yes, I forgot to add an important point. If you want to update your forecast, you have to tear down the linear or neural-net block and restart the processing. Otherwise you will get an update, but with the old coefficients; they will not be re-estimated. When you tear the blocks down and create new ones, you get fresh coefficients.
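The same effect is easy to see outside Deductor: keep the old model object and you keep the old coefficients; only tearing the block down and refitting on the updated sample gives fresh ones. A minimal numpy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_block(x, y):
    """Fit a fresh linear block: re-estimate intercept and slope from scratch."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Coefficients the old block was built with.
x_old = rng.normal(size=200)
y_old = 1.5 * x_old + rng.normal(scale=0.1, size=200)
stale_coef = build_block(x_old, y_old)

# New observations arrive.  Reusing stale_coef is the "update with old
# coefficients"; rebuilding the block on the full sample gives fresh ones.
x_new = rng.normal(size=50)
y_new = 2.0 * x_new + rng.normal(scale=0.1, size=50)
fresh_coef = build_block(np.concatenate([x_old, x_new]),
                         np.concatenate([y_old, y_new]))

print("stale:", stale_coef, "fresh:", fresh_coef)
```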
 
Goodbye, everyone.
 

This DA is quite weak.


Ran it on a simple classic recognition example:


Example strings:

1. Bird

2. Fly

3. Aeroplane

4. Glider

5. Wingless rocket

The first six columns are the inputs describing the objects to be recognized; the remaining columns are the outputs.




A two-layer network: 6 x 2 x 6 x 6
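One possible reading of "6 x 2 x 6 x 6" is 6 inputs, hidden layers of 2 and 6 neurons, and 6 outputs. A PyTorch sketch of such a net, with an invented ±1-coded training table (the post does not show the actual input and output columns, so the numbers below are purely illustrative):

```python
import torch
import torch.nn as nn

# Assumed reading of "6 x 2 x 6 x 6"; tanh activations are also an assumption,
# consistent with outputs that can go negative.
net = nn.Sequential(
    nn.Linear(6, 2), nn.Tanh(),
    nn.Linear(2, 6), nn.Tanh(),
    nn.Linear(6, 6), nn.Tanh(),
)

# Invented feature rows for: bird, fly, aeroplane, glider, wingless rocket.
X = torch.tensor([
    [ 1,  1, -1,  1, -1,  1],
    [ 1,  1, -1, -1,  1,  1],
    [-1,  1,  1,  1, -1, -1],
    [-1,  1, -1,  1, -1, -1],
    [-1, -1,  1, -1, -1, -1],
], dtype=torch.float32)

# Invented targets: one column per object plus a final biological/mechanical flag.
Y = torch.tensor([
    [ 1, -1, -1, -1, -1,  1],
    [-1,  1, -1, -1, -1,  1],
    [-1, -1,  1, -1, -1, -1],
    [-1, -1, -1,  1, -1, -1],
    [-1, -1, -1, -1,  1, -1],
], dtype=torch.float32)
```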


Training with Back Propagation is a real letdown: only 40% of the training sample reaches linear separability (an example is considered recognized if its error is less than 0.01).
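Continuing that sketch, ordinary gradient descent stands in here for Deductor's Back Propagation, and the 0.01 error threshold is the recognition criterion described above:

```python
import torch.optim as optim

opt = optim.SGD(net.parameters(), lr=0.1)

for epoch in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)
    loss.backward()
    opt.step()

# An example counts as recognized if its mean squared error is below 0.01.
with torch.no_grad():
    per_example = ((net(X) - Y) ** 2).mean(dim=1)
    print("recognized:", (per_example < 0.01).float().mean().item())
```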


So neither the aeroplane, the glider nor the rocket was recognized: for any inputs, all of their outputs are negative. The bird and the fly are recognized accurately enough. The output separating biological objects from mechanical ones is also reproduced quite accurately.



When testing RPROP under the same conditions and the same architecture, the results are better:

So here the linear separability is already 100%, but errors are present.
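PyTorch happens to ship a resilient-backpropagation optimizer (torch.optim.Rprop), so the same sketch can be rerun with only the optimizer changed, after resetting the weights:

```python
# Reset the weights, then train the same net with resilient backpropagation.
for layer in net:
    if isinstance(layer, nn.Linear):
        layer.reset_parameters()

opt = optim.Rprop(net.parameters(), lr=0.01)

for epoch in range(5000):
    opt.zero_grad()
    nn.functional.mse_loss(net(X), Y).backward()
    opt.step()

with torch.no_grad():
    per_example = ((net(X) - Y) ** 2).mean(dim=1)
    print("recognized:", (per_example < 0.01).float().mean().item())
```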

Reason: