Not Mashka's business! - page 5

 
Neutron:

That's what you get if you feed the adder's inputs not with the lagged MAs, but with the MA itself :-), i.e. with the first derivative of the ideal MA.

All that's left is to find an ideal MA. Try the real Jurik. Not ideal, but close to it. And it's quite smooth too.
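To make the "adder" idea concrete, here is a minimal sketch (my own illustration, not code from this thread) of a weighted summator fed with the first differences of a smoothed series, with the weights fitted by least squares. The SMA period, window length P and the use of numpy are illustrative assumptions.

```python
# Illustrative sketch only: a linear "adder" fed with the first derivative
# (first differences) of a smoothed MA. All parameters are arbitrary examples.
import numpy as np

def sma(x, period):
    """Simple moving average, used here as a stand-in for the 'ideal' MA."""
    return np.convolve(x, np.ones(period) / period, mode="valid")

def adder_forecast(ma, P=5):
    """Predict the next MA value as a weighted sum of the last P increments."""
    d = np.diff(ma)                                    # first derivative of the MA
    X = np.array([d[i:i + P] for i in range(len(d) - P)])
    y = d[P:]                                          # next increment for each row
    w, *_ = np.linalg.lstsq(X, y, rcond=None)          # fit the adder's weights
    return ma[-1] + float(d[-P:] @ w)                  # extrapolate one step

prices = 100.0 + np.cumsum(np.random.randn(500))       # toy price series
print(adder_forecast(sma(prices, 9), P=5))
```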

 
Neutron:

Oh, how interesting!

That's what you get if you feed the adder's inputs not with the lagged MAs, but with the MA itself :-), i.e. with the first derivative of the ideal MA.

And why is nobody asking me to elaborate? :) Is it all clear to everyone?

There are no ideal MAs on the current bar. There could indeed be predictions of them. Is that what we are talking about?

 
Neutron:

to Vinin

Give me your thoughts - there will be a sequel!


Unfortunately, I became just a reader long ago. That's why I can't offer you anything. The topic is really interesting, though. My apologies.

 
lna01:
Neutron:

Oh, how interesting!

That's what you get if you feed the adder's inputs not with the lagged MAs, but with the MA itself :-), i.e. with the first derivative of the ideal MA.

And why is nobody asking me to elaborate? :) Is it all clear to everyone?

There are no ideal MAs on the current bar. There could indeed be predictions of them. Is that what we are talking about?

No, I'm talking about forecasting the ideal MA over the width of the smoothing window. Let me remind you that at the right edge of the time series it genuinely wanders, with a characteristic settling zone exactly the width of that very window.


I did find a mistake in the code - I skipped a line without correcting the index, so the forecast was built from weights computed for the window one step back, multiplied by the current value of the ideal MA. Here is the corrected result (see fig.). The weights are multiplied by the MA (its derivative, to be precise) taken one window earlier.


This is a forecast 5 bars ahead. As one would expect, the forecast curve falls apart right from the start. Increasing the number of equations above 2 (I checked up to a hundred) gives no significant improvement.


P.S. I'm relieved!
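For readers following along, here is a hedged sketch of how such a multi-bar forecast can be iterated: the fitted weights are applied repeatedly and each predicted increment is fed back into the input window, which is also why errors compound after a few bars. The window length and horizon are my own illustrative choices, not the poster's actual code.

```python
import numpy as np

def iterated_ma_forecast(ma, P=5, horizon=5):
    """Forecast `horizon` bars of the MA by reapplying least-squares weights
    and feeding each predicted increment back in; errors compound quickly."""
    d = list(np.diff(ma))
    X = np.array([d[i:i + P] for i in range(len(d) - P)])
    y = np.array(d[P:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    window, last, out = d[-P:], float(ma[-1]), []
    for _ in range(horizon):
        inc = float(np.dot(window, w))      # predicted next increment of the MA
        last += inc
        out.append(last)
        window = window[1:] + [inc]         # slide the window onto the forecast
    return out
```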


to Vinin

Unfortunately, I became just a reader long ago. That's why I can't offer you anything. The topic is really interesting, though. My apologies.

Well then, how about setting a neural network on this task? Not up for it?

What if the a priori nonlinearity of a NN, built into at least two hidden layers, works a miracle...
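If someone wants to try the two-hidden-layer idea, here is a hedged sketch using scikit-learn's MLPRegressor on the same lagged-increment inputs; the layer sizes, activation and the choice of library are my own assumptions, not anything agreed in the thread.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def nn_forecast(ma, P=5):
    """One-step MA forecast from a small network with two hidden layers."""
    d = np.diff(ma)
    X = np.array([d[i:i + P] for i in range(len(d) - P)])    # lagged increments
    y = d[P:]                                                # next increment
    net = MLPRegressor(hidden_layer_sizes=(8, 4),            # two hidden layers
                       activation="tanh", max_iter=5000, random_state=0)
    net.fit(X, y)
    return ma[-1] + float(net.predict(d[-P:].reshape(1, -1))[0])
```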

 
Neutron:

to Vinin.

Unfortunately, I became just a reader long ago. That's why I can't offer you anything. The topic is really interesting, though. My apologies.

Well then, how about setting a neural network on this task? Not up for it?

What if the a priori nonlinearity of a NN, built into at least two hidden layers, works a miracle...

Of course a neural network can be built. But it's not only a matter of the network itself. The inputs have to be defined, and I don't see them yet.

 

The inputs are not the problem. The main thing is to determine the confidence interval for trading.

Sometimes a system trades for six months and then abruptly starts losing money, and sometimes it happens even sooner...

 
Neutron:

I did find a mistake in the code - I skipped a line without correcting the index, so the forecast was built from weights computed for the window one step back, multiplied by the current value of the ideal MA. Here is the corrected result (see fig.). The weights are multiplied by the MA (its derivative, to be precise) taken one window earlier.


This is a forecast 5 bars ahead. As one would expect, the forecast curve falls apart right from the start. Increasing the number of equations above 2 (I checked up to a hundred) gives no significant improvement.


Seryoga, this is a very poor forecast; even autocorrelation methods give a slightly better one. You will get huge errors once you move to the real time series.

 
grasn:

Seryoga, this is a very poor forecast; even autocorrelation methods give a slightly better one. You will get huge errors once you move to the real time series.

If you are referring to linear autoregressive models of the form

x[n+1] = SUM{ a[i] * x[n-i] }, i = 0...P-1,

then I beg to differ. The point is that I am solving practically the same problem (compare: x[n+1] = SUM{ w[i] * x[n-i] }, where i = 0...P-1); the only difference is that the weights under the sum sign are determined adaptively at depth P, while in the classical formulation they are determined integrally, on a larger scale (over the statistical sample, when the correlation coefficients are computed). The fact that there is no result only strengthens my desire to move on to analysis by non-linear methods, in particular using NNs.
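To make the contrast concrete, here is a small sketch (my own illustration, not the poster's code) of the two ways of getting the weights: classical AR(P) coefficients solved from the autocorrelation (Yule-Walker) over the whole sample, versus weights refitted by least squares on only the last `depth` samples.

```python
import numpy as np

def acf(x, maxlag):
    """Sample autocorrelation up to maxlag."""
    x = np.asarray(x, float) - np.mean(x)
    d = float(np.dot(x, x))
    return np.array([np.dot(x[:len(x) - k], x[k:]) / d for k in range(maxlag + 1)])

def yule_walker_weights(x, P):
    """Classical AR(P): coefficients from the autocorrelation of the whole sample."""
    r = acf(x, P)
    R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
    return np.linalg.solve(R, r[1:P + 1])           # w[0] multiplies x[n], w[1] x[n-1], ...

def adaptive_weights(x, P, depth):
    """Weights refitted by least squares on only the last `depth` samples."""
    seg = np.asarray(x, float)[-(depth + P):]
    X = np.array([seg[i:i + P][::-1] for i in range(depth)])
    y = seg[P:P + depth]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# One-step forecast with either weight set: x[n+1] = SUM{ w[i] * x[n-i] }
# forecast = float(np.dot(w, np.asarray(x)[-1:-P-1:-1]))
```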

As for extrapolating the ideal MA (you posted a graph), I think the forecast horizon can be extended considerably by preserving the derivatives of order n of the low-pass filter output, where n should be greater than 2. In my case only the first derivative was preserved, which is why the series started to fall apart as soon as the horizon went beyond 2-3 bars.
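A hedged sketch of what "preserving derivatives up to order n" could look like in practice: a Taylor-style extension of the smoothed series whose first n finite differences stay continuous at the right edge. The order and horizon below are illustrative choices, not values from the thread.

```python
import numpy as np
from math import factorial

def extrapolate_keeping_derivatives(ma, n=3, horizon=5):
    """Extend the low-pass-filtered series so that its finite-difference
    derivatives up to order n stay continuous at the right edge."""
    series = list(map(float, ma))
    out = []
    for _ in range(horizon):
        # finite differences of orders 1..n at the current right edge
        diffs = [np.diff(series, k)[-1] for k in range(1, n + 1)]
        step = sum(d / factorial(k) for k, d in enumerate(diffs, start=1))
        series.append(series[-1] + step)
        out.append(series[-1])
    return out
```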

 
Neutron:
grasn:

Seryoga, this is a very poor forecast; even autocorrelation methods give a slightly better one. You will get huge errors once you move to the real time series.

If you are referring to linear autoregressive models of the form

x[n+1] = SUM{ a[i] * x[n-i] }, i = 0...P-1,

then I beg to differ. The point is that I am solving practically the same problem (compare: x[n+1] = SUM{ w[i] * x[n-i] }, where i = 0...P-1); the only difference is that the weights under the sum sign are determined adaptively at depth P, while in the classical formulation they are determined integrally, on a larger scale (over the statistical sample, when the correlation coefficients are computed). The fact that there is no result only strengthens my desire to move on to analysis by non-linear methods, in particular using NNs.

As for extrapolating the ideal MA (you posted a graph), I think the forecast horizon can be extended considerably by preserving the derivatives of order n of the low-pass filter output, where n should be greater than 2. In my case only the first derivative was preserved, which is why the series started to fall apart as soon as the horizon went beyond 2-3 bars.




Serega, where isn't a weighted adder used these days? So one could argue you already have a neural network, albeit a small one. Let's compare your model and mine; we only need to agree on the criteria. I'll use predict() in MathCAD, and you use your system. We have the same development environment, so let's agree on the data file (the quotes, the process under test - close, average or whatever..., the test segment). We test only the MA forecast; the MA itself is chosen adaptively - how exactly doesn't matter, only the final result counts. We test on every sample, thereby increasing the statistical validity (there seems to be enough data).
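As a sketch of the proposed protocol (my own illustration; the actual comparison would use MathCAD and the poster's own system), one can run a forecaster at every sample of the test segment and store the pairs (true MA value, prediction) for the chosen horizon:

```python
import numpy as np

def rolling_test(ma, forecaster, history=200, horizon=10):
    """Forecast at every sample of the test segment; return (true, predicted) pairs."""
    true_vals, pred_vals = [], []
    for t in range(history, len(ma) - horizon):
        path = forecaster(ma[:t], horizon)           # list of `horizon` forecast values
        true_vals.append(ma[t + horizon - 1])        # the MA value `horizon` bars later
        pred_vals.append(path[-1])
    return np.array(true_vals), np.array(pred_vals)

# e.g. with the iterated forecast sketched earlier:
# t, p = rolling_test(ma, lambda hist, h: iterated_ma_forecast(hist, P=5, horizon=h))
```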


But my forecast horizon is selected adaptively and takes values within the limits specified beforehand. Here is an example of my MA forecast four samples ahead:


[no errors].


Shall we try to compare? If so, what are your suggestions for the criteria? Preferably a single number per sample - that way, I think, it will be easier to compare.


PS: Let's not set too rigid a deadline for the test; I imagine you have plenty of other things to do as well.

PS2: For the test we can exchange files by mail, or we can simply take each other's word for it :o)

 

Ok!

Did I get it right that we both take the same file, build the smooth curve (MA) from it, and make a forecast N samples ahead? If so, we can evaluate the result as follows: collect prediction statistics (1000 results) for, say, 10 samples ahead and build a forecast field in Cartesian coordinates, plotting the true MA value on the abscissa and the prediction on the ordinate.


Through the resulting cloud we fit a straight line by least squares, and whichever method's line has a slope (tangent) closer to 1 wins!
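A minimal sketch of that criterion, assuming the (true, predicted) pairs have already been collected: fit the cloud with a least-squares line and report its slope; the closer to 1, the better the method.

```python
import numpy as np

def slope_criterion(true_vals, pred_vals):
    """Least-squares line through the forecast cloud; return its slope."""
    slope, intercept = np.polyfit(true_vals, pred_vals, 1)
    return slope

# score = abs(slope_criterion(t, p) - 1.0)   # smaller is better
```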


P.S. And about the small neural network - you hit the bull's-eye, as usual :-)