Difference calculus, examples. - page 7

 
Nikolai Semko:

Here is one possible implementation of this approach. No redrawing or shifting. This is the second derivative of your line.
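Nikolai's point about the second derivative can be checked directly: on a discrete series, the analogue of the second derivative is the second difference. A minimal sketch (the function name is mine, not from the thread):

```python
# Discrete analogue of the second derivative: the second difference
# d2[i] = y[i+1] - 2*y[i] + y[i-1], defined for interior points only.
def second_difference(ys):
    return [ys[i + 1] - 2 * ys[i] + ys[i - 1] for i in range(1, len(ys) - 1)]
```

On a parabola it comes out constant, mirroring the fact that the second derivative of x^2 is 2.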



Yes.

And I like it. )))

Thank you for your participation.

 
Aleksey Panfilov:

We'll see as we go along.

Polynomials, splines, Gaussian processes...
The blue dots are training points, the red dots are test points. Fit a bunch of candidate curves on the blue ones, test them
with whatever metric you like on the red ones, and pick the best one. You can also randomly remove some of the blue ones...
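That recipe can be sketched end to end: fit polynomials of several degrees on the training ("blue") points by least squares, score each on the test ("red") points, and keep the best. All names here are my own illustration, not code from the thread:

```python
# Train/test curve selection: least-squares polynomial fits of several
# degrees, scored by RMSE on held-out points.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def polyfit(xs, ys, deg):
    """Least-squares coefficients c[0] + c[1]*x + ... via normal equations."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(n)]
    return solve(A, b)

def polyval(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

def rmse(c, xs, ys):
    return (sum((polyval(c, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

def select_model(train, test, degrees=(1, 2, 3)):
    """Fit each degree on the training points; return the (degree, coeffs)
    pair with the smallest RMSE on the test points."""
    tx, ty = [p[0] for p in train], [p[1] for p in train]
    fits = [(d, polyfit(tx, ty, d)) for d in degrees]
    return min(fits, key=lambda dc: rmse(dc[1], [p[0] for p in test],
                                         [p[1] for p in test]))
```

Randomly dropping some training points, as suggested, would just mean subsampling `train` before calling `select_model`.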


 
Vizard_:

Polynomials, splines, Gaussian processes...
The blue dots are training points, the red dots are test points. Fit a bunch of candidate curves on the blue ones, test them
with whatever metric you like on the red ones, and pick the best one. You can also randomly remove some of the blue ones...



And so on, and... yes. Neural networks are a very serious lure.

"They say no one has ever come back from there." :))))

 
 
Aleksey Panfilov:

And so on, and... yes. Neural networks are a very serious lure.

"They say no one has ever come back from there." :))))

Except for wizards, of course. ))
The topic is so interesting that you don't even want to go back)).
 
Yuriy Asaulenko:
The topic is so interesting that you don't even want to go back)).

)))

From what I have seen about neural networks, it seems that difference equations are quite widely present there, only in the explanations they are written in a different form, apparently already adapted to the problems at hand.

And it is logical, if we are talking about the analysis of discrete information.
 
Aleksey Panfilov:

)))

From what I have seen about neural networks, it seems that difference equations are quite widely present there, only in the explanations they are written in a different form, apparently already adapted to the problems at hand.

I need to read the topic. I have not really understood what the deal with RU (difference equations) is.

I reread the topic. I think everything is clear, but I still do not understand what the idea is.

Analytic functions can be drawn on history without much trouble, probably by any method, up to and including the 4th derivative. Polynomial regression does a pretty good job of the approximation.

What is the advantage of RU?

 

Directly from difference equations for equidistant points we can make interpolation formulas in another way.

-3*Y3 = 1*Y1 - 3*Y2 - 1*Y4

-6*Y3 = 1*Y1 - 4*Y2 - 4*Y4 + 1*Y5

-10*Y3 = 1*Y1 - 5*Y2 - 10*Y4 + 5*Y5 - 1*Y6

-15*Y3 = 1*Y1 - 6*Y2 - 20*Y4 + 15*Y5 - 6*Y6 + 1*Y7

-21*Y3 = 1*Y1 - 7*Y2 - 35*Y4 + 35*Y5 - 21*Y6 + 7*Y7 - 1*Y8
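These rows are just binomial coefficients with alternating signs: the n-th finite difference of a polynomial of degree below n vanishes on equidistant points, and solving that identity for Y3 yields each line above. A quick numeric check of the second row (the function name is mine):

```python
# The n-th finite difference sum_k (-1)^k * C(n, k) * Y[k+1] is zero
# whenever Y comes from a polynomial of degree below n on equidistant points.
from math import comb

def nth_difference(ys):
    n = len(ys) - 1
    return sum((-1) ** k * comb(n, k) * y for k, y in enumerate(ys))

p = lambda x: 2 * x**3 - 5 * x**2 + x - 7   # arbitrary cubic
ys = [p(x) for x in range(1, 6)]            # Y1..Y5 at equidistant points
```

The 4th difference of a cubic vanishes, which is exactly the rearranged row -6*Y3 = 1*Y1 - 4*Y2 - 4*Y4 + 1*Y5.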

Here we take as new information not the last price value itself, but its last increment (the first difference).

As a code:

      a1_Buffer[i] = (open[i] - 3*open[i+1] -  1*a1_Buffer[i+1]) / (-3);
      a2_Buffer[i] = (open[i] - 4*open[i+1] -  4*a2_Buffer[i+1] +  1*a2_Buffer[i+2]) / (-6);
      a3_Buffer[i] = (open[i] - 5*open[i+1] - 10*a3_Buffer[i+1] +  5*a3_Buffer[i+2] -  1*a3_Buffer[i+3]) / (-10);
      a4_Buffer[i] = (open[i] - 6*open[i+1] - 20*a4_Buffer[i+1] + 15*a4_Buffer[i+2] -  6*a4_Buffer[i+3] + 1*a4_Buffer[i+4]) / (-15);
      a5_Buffer[i] = (open[i] - 7*open[i+1] - 35*a5_Buffer[i+1] + 35*a5_Buffer[i+2] - 21*a5_Buffer[i+3] + 7*a5_Buffer[i+4] - 1*a5_Buffer[i+5]) / (-21);

The figure shows the beginning of the graph.

You can clearly see that this has allowed us to cope with auto-oscillations at a certain stage.
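For readers without MetaTrader, the first line (a1) can be reproduced on a plain array; here it is in forward time, where index t-1 is the older bar (a sketch of the same arithmetic, not the original indicator):

```python
# Port of: a1_Buffer[i] = (open[i] - 3*open[i+1] - a1_Buffer[i+1]) / (-3)
# MQL4 series index i+1 (the older bar) becomes t-1 in forward time.
def a1_line(price, init=0.0):
    out = [init]
    for t in range(1, len(price)):
        out.append((price[t] - 3.0 * price[t - 1] - out[t - 1]) / (-3.0))
    return out
```

The homogeneous part of this recurrence shrinks by a factor of 3 per bar, so the line is stable (no accumulating auto-oscillation): on a constant series it reproduces the constant, and on a linear trend it settles onto a parallel line.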

Of course, the subsequent differences can also be considered as new information.

However, already with the first difference it is not quite clear to me which algebraic curve we are drawing. And as the "leverage" increases, things only get more confusing there. ))))

 
Aleksey Panfilov:

And the lines constructed from polynomials of degree 5 and 6 (red, yellow) fall into something like resonance or auto-oscillation and gradually accumulate amplitude. Increasing the leverage for polynomials of degree 5 and higher does not change the situation.

There the matrix simply degenerates :) In such cases regularization is used, or the degree is reduced.
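The degeneration remark can be illustrated numerically: when the polynomial degree exceeds what the data supports, the normal-equations (Gram) matrix of the fit becomes rank-deficient, and ridge regularization (adding lam to its diagonal) restores invertibility. A sketch with hypothetical names, not code from the thread:

```python
# Gram matrix of a polynomial least-squares fit, optionally ridge-regularized.
def normal_matrix(xs, deg, lam=0.0):
    n = deg + 1
    return [[sum(x ** (i + j) for x in xs) + (lam if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def det(M):
    """Determinant via Gaussian elimination; enough to detect degeneracy."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-300:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d
```

Three data points cannot pin down the six coefficients of a degree-5 fit, so the 6x6 Gram matrix is singular; with lam > 0 it becomes positive definite again.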
 

Alexey, tell me: how is your indicator, with its obscure epithets (polynomial, Newton's binomial, difference, interpolation), fundamentally different from an ordinary moving average? More precisely, from a simple moving average with period 72 applied to another moving average with the same period.

Your indicator is yellow.

The SMA from an SMA with a period of 72 is purple.
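For reference, the purple line's construction (an SMA applied to an SMA, which weights the window triangularly) can be sketched as follows; period 72 matches the post, and the function names are mine:

```python
# Simple moving average and its double application (SMA of SMA).
def sma(xs, period):
    """SMA over a sliding window; output is shorter by period - 1 samples."""
    return [sum(xs[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(xs))]

def sma_of_sma(xs, period=72):
    return sma(sma(xs, period), period)
```

On a linear trend each pass only shifts the line back by (period - 1)/2 bars, so the double SMA is a smoother, more delayed copy of the trend.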



Files:
MaMa.mq4  7 kb