Testing real-time forecasting systems - page 30

 

Here's mine: still the same prediction :o) and the fact (third day of testing).


 
grasn >> :

Here's mine: still the same prediction :o) and the fact (third day of testing).




It looks like you have two superimposed predictions shifted in time. The second prediction is more accurate. Could you explain the difference between the two predictions and why they are time-shifted?

 

I combined our pictures for clarity.


Looks similar :-)


Colleagues, if it's no secret: how do you choose the forecast horizon after which the trajectory is recalculated?

 
gpwr >> :

It looks like you have two superimposed predictions shifted in time. The second prediction is more accurate. Could you explain the difference between the two predictions and why they are time-shifted?

It's the same prediction. I posted it as far back as page 25; on page 29 neoclassic clarified the lines. Here is that text:

You can see it better here. And the MT screenshots show my virtuoso level of MQL (between you and me, I think the pros are biting their elbows with envy). The dark grey boundary is 1 RMS built around the expected-value (mean) trajectory of the process; by and large, it's rare for a realisation of a process to fit within such a "narrow" boundary. The "worm" inside is the refinement of the process.

These are not trajectories as such, in the sense of a prediction; they are areas where an increased concentration of the quote process is to be expected. What is pictured is a first approximation; the process is iterative. Beyond that, alternative realisations are considered.


It's the same forecast: the inner contour refines the price, but it's shifted by 24 samples, and that was so from the beginning :o). It's some resolution-related feature of the forecast; for the first 24 samples the price will simply be concentrated between the dark grey boundaries.


PS: I hope I've made myself clear, it just bothers me that I have to repeat myself :o)
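For illustration, here is a minimal sketch of such a ±1 RMS concentration band in Python (standing in for the MathCAD calculation, which is not shown in the thread). The moving-average "expected trajectory", the 24-sample window, and the synthetic quotes are assumptions made for the example, not grasn's actual iterative model.

```python
import numpy as np

def concentration_band(prices, window=24):
    """Build a +/-1 RMS band around a smoothed "expected" trajectory.

    Illustrative stand-in only: the expected trajectory here is a plain
    moving average, and the band half-width is the RMS of the residuals
    around it. grasn's actual model (iterative, with alternative
    realisations) is not published in the thread.
    """
    prices = np.asarray(prices, dtype=float)
    mean = np.convolve(prices, np.ones(window) / window, mode="valid")
    resid = prices[window - 1:] - mean
    rms = np.sqrt(np.mean(resid ** 2))
    return mean - rms, mean + rms  # the dark grey lower/upper boundaries

# Toy usage on a synthetic random walk instead of real quotes.
rng = np.random.default_rng(0)
quotes = 1.25 + np.cumsum(rng.normal(0.0, 0.0005, size=500))
lo, hi = concentration_band(quotes)
# Align quotes to the valid part of the band (window - 1 = 23 samples lost).
inside = np.mean((quotes[23:] >= lo) & (quotes[23:] <= hi))
print(f"share of samples inside the 1-RMS band: {inside:.2%}")
```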

 
neoclassic >> :

I combined our pictures for clarity.


Looks similar :-)


Colleagues, if it's no secret: how do you choose the forecast horizon after which the trajectory is recalculated?


I haven't calculated it at all yet. That's the whole point :o)

 
grasn >> :

It's the same prediction. I posted it as far back as page 25; on page 29 neoclassic clarified the lines. Here is that text:


It's the same forecast: the inner contour refines the price, but it's shifted by 24 samples, and that was so from the beginning :o). It's some resolution-related feature of the forecast; for the first 24 samples the price will simply be concentrated between the dark grey boundaries.


PS: I hope I've made myself clear, it just bothers me that I have to repeat myself :o)

Thank you. Understood. The accuracy of your predictions is pretty good. Is it always like this?

 
grasn >> :

I haven't calculated it at all yet. That's the whole point :o)

My GRNN indicator is fast and recalculates on every bar. Actually, I'm in favour of recalculating on every bar: if the forecasting system is really accurate, recalculation on every bar will do no harm; on the contrary, it helps improve the accuracy of the predictions. If the calculations are slow, or require saving data to a file and running MathCAD or MATLAB to process it, it is difficult to recalculate on every bar. By the way, grasn, where do you do your calculations? It looks like MathCAD.
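Since the thread doesn't show the indicator's code, here is a minimal Python sketch of the GRNN idea itself (Specht's general regression neural network, i.e. Gaussian-kernel-weighted regression). The lag depth, kernel width, and synthetic returns are illustrative assumptions, not gpwr's actual parameters.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN (Specht, 1991) one-point prediction.

    A GRNN is Gaussian-kernel (Nadaraya-Watson) regression: the forecast
    is a weighted average of the training targets, with weights decaying
    in the distance from the query pattern. There is no iterative
    training, which is why recalculating on every bar is cheap.
    sigma (kernel width) is an assumed value, not gpwr's setting.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)

# Toy usage: predict the next return from the previous 5 returns.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 1.0, size=300)
lag = 5
X = np.array([returns[i:i + lag] for i in range(len(returns) - lag)])
y = returns[lag:]
forecast = grnn_predict(X[:-1], y[:-1], X[-1])  # "recalculate" on the newest bar
print(f"one-bar-ahead GRNN forecast: {forecast:+.4f}")
```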

 

to gpwr

Is it always like this?

I have a few models in the works at the moment. I haven't gathered full statistics on this one; it's quite computationally expensive, but "spot testing" on relatively large areas (though not on the full history) shows good results. I'm going to "add" a neural network (some ideas have come up) and then test thoroughly on history, this time evaluating by trades rather than by errors. Logically, probabilistic networks should suit my statistics, but which ones exactly I will need to look at more closely.

I'm actually in favour of recalculating on every bar

I'm a proponent of "control" on every bar; it's worse if the forecast starts to "jump". I don't have statistics in the full sense of the word, but it would be surprising to expect strong variability. On the other hand, the method produces several alternative price-concentration zones, and I want to hand their detailed evaluation and the decision-making over to a neural network, since I cannot make the model automatic myself.

If the forecasting system is really accurate, recalculation on every bar will do no harm; on the contrary, it helps improve the accuracy of the predictions

My understanding, knowledge and experience show the following: it is useless, for example, to make forecasts one or several bars ahead in Forex with any model (AR, ARIMA, FARIMA, ..., neural networks, or any of the TA methods). Simply useless. In the first 1-5 samples (depending on the timeframe) the price changes so catastrophically fast, at its maximum amplitudes, and by such incomprehensible laws that it is very difficult to catch these movements. Using the various "method of moments" estimators to identify model parameters will give no positive results either. The only possibility is maximum-likelihood estimation, but it is complicated and solutions do not always exist. Fractal analysis (proper mathematical fractal analysis, not the "fractals" nonsense from technical analysis) has shown that there is a rather "narrow" region where you can "hear" the market.
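To make the estimation point concrete, here is a small self-contained sketch contrasting a method-of-moments (Yule-Walker) fit of an AR(1) with conditional least squares (which coincides with the conditional Gaussian maximum-likelihood estimate for this model), plus the resulting one-bar-ahead forecast. The AR(1) form and the synthetic data are assumptions for illustration; on real quotes, grasn's point is precisely that such one-bar forecasts carry little usable information.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(1) returns: x[t] = phi * x[t-1] + noise (illustrative only).
phi_true, n = 0.3, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

# Method of moments (Yule-Walker): phi estimated by the lag-1 autocorrelation.
xc = x - x.mean()
phi_mom = np.dot(xc[1:], xc[:-1]) / np.dot(xc, xc)

# Conditional least squares, which for a Gaussian AR(1) coincides with
# the conditional maximum-likelihood estimate.
phi_mle = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

print(f"phi, method of moments : {phi_mom:.4f}")
print(f"phi, conditional MLE   : {phi_mle:.4f}")
print(f"one-bar-ahead forecast : {phi_mle * x[-1]:+.4f}")
```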

If the calculations are slow, or require saving data to a file and running MathCAD or MATLAB to process it, it is difficult to recalculate on every bar

This is not a problem; it can be solved in two ways:

  • Use VisSIM as an integration platform
  • Transfer the model to MT, which is what I intend to do in the future

Where do you do your calculations? It looks like MathCAD.

Exactly MathCAD, and VisSIM is integrated with it.
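As a rough illustration of the file-exchange route discussed above (the terminal dumps quotes to a file, an external calculator writes a forecast back), here is a hedged Python sketch playing the external calculator's role. The file names, CSV layout, and the flat-extrapolation "model" are hypothetical placeholders, not an actual MT, MathCAD, or VisSIM interface.

```python
import csv
import time
from pathlib import Path

QUOTES = Path("quotes_export.csv")      # hypothetical file the terminal writes
FORECAST = Path("forecast_import.csv")  # hypothetical file the terminal reads

def compute_forecast(closes, horizon=24):
    """Placeholder model: flat extrapolation of the last close.

    In the real workflow this is where MathCAD/MATLAB (or the ported
    model) would run; the 24-sample horizon echoes the shift discussed
    earlier in the thread.
    """
    return [closes[-1]] * horizon

last_mtime = 0.0
while True:  # poll for a fresh export; stop with Ctrl+C
    if QUOTES.exists() and QUOTES.stat().st_mtime > last_mtime:
        last_mtime = QUOTES.stat().st_mtime
        with QUOTES.open() as f:
            closes = [float(row[-1]) for row in csv.reader(f) if row]
        with FORECAST.open("w", newline="") as f:
            csv.writer(f).writerows([v] for v in compute_forecast(closes))
    time.sleep(1.0)
```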

 

Hi all, I'll show you my result (I'm at another terminal now and had to overlay screenshots, sorry for the quality).


(as it was "before")

 

Dear gpwr, what parameters do you set for your indicator? Thank you.
