From theory to practice - page 521

 

What's wrong with the dashboard?

It shows the average value of the graph. What would be the average value of a graph that rose from mark 0 to mark 10?
5.
So the scale will now be at 5. Even though the graph is already at 10.
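The arithmetic behind this is easy to check. A minimal sketch in Python (illustrative, not from the thread): a graph rising linearly from 0 to 10 averages 5, while its latest value is 10.

```python
# A ramp rising linearly from 0 to 10 in 100 steps.
values = [i * 10 / 100 for i in range(101)]

average = sum(values) / len(values)
last = values[-1]

print(average)  # 5.0  <- what the dashboard shows
print(last)     # 10.0 <- where the graph actually is
```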


A linear regression will help.
The line of a linear regression will go through the centre of the price channel, and the last point of this line will be at 10.
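A quick sketch of that claim, assuming a plain least-squares fit over the same 0-to-10 ramp: the fitted line runs through the middle of the channel, and its last point sits at the current value (10), not at the average (5).

```python
import numpy as np

x = np.arange(101)
y = x * 10 / 100                        # the ramp: 0 .. 10

slope, intercept = np.polyfit(x, y, 1)  # degree-1 (linear) fit

# Value of the regression line at the most recent bar.
endpoint = slope * x[-1] + intercept
print(endpoint)  # ~10.0
```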



But channels are not always straight. Some channels are arc-shaped. In this case the LR will not work well.


This is where polynomial regression will do the trick.
The PR plot will go straight down the centre of the channel, and its last point will also be in the centre of the channel.


And here's how the PR will go along such an arc (if it's exactly a slice of a sine wave).
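The arc case can be sketched in Python too (an illustration, not the poster's code): on a slice of a sine wave, a linear fit misses the bend, while a low-degree polynomial fit follows the centre of the arc all the way to its last point.

```python
import numpy as np

x = np.linspace(0, np.pi, 200)
y = np.sin(x)                 # the arc: half a sine wave, 0 -> 1 -> 0

lin = np.polyval(np.polyfit(x, y, 1), x)   # linear regression
poly = np.polyval(np.polyfit(x, y, 5), x)  # polynomial regression

print(abs(lin[-1] - y[-1]))    # large error at the last point
print(abs(poly[-1] - y[-1]))   # much smaller error at the last point
```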


You have to consider the disadvantages as well...

 
Smokchi Struck:

What's wrong with the dashboard?

It shows the average value of the graph. What would be the average value of a graph that rose from mark 0 to mark 10?
5.
So the scale will now be at 5. Even though the graph is already at 10.


A linear regression will help.
The line of a linear regression will go through the centre of the price channel, and the last point of this line will be at 10.



But channels are not always straight. Some channels are arc-shaped. In this case the LR will not work well.


This is where polynomial regression will do the trick.
The PR graph will go straight down the centre of the channel and its last point will also be in the centre of the channel.


Here's another arc (i.e. a slice of a sine wave).


We have to consider the disadvantages as well...

 
Maxim Dmitrievsky:

But the basis is the same: dependency reconstruction by mapping into new dimensions using kernel transformations, and there's more on top of that.

By the way, Victor has an article on kernel estimates of probability densities, here
https://www.mql5.com/ru/articles/396
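The "mapping into new dimensions via a kernel" idea Maxim mentions can be sketched as kernel ridge regression with an RBF kernel, written directly in NumPy. This is a generic illustration; the function name, the `gamma` value, and the ridge term are my own illustrative choices, not anything from the thread or the linked article.

```python
import numpy as np

def rbf_kernel(a, b, gamma=10.0):
    # K[i, j] = exp(-gamma * (a_i - b_j)^2) for 1-D inputs.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)          # a nonlinear dependency

# Solve (K + lambda*I) alpha = y; the small ridge term keeps the
# near-singular Gaussian Gram matrix numerically stable.
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x)), y)

y_hat = rbf_kernel(x, x) @ alpha
print(np.max(np.abs(y_hat - y)))   # reconstruction error is small
```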
 
Maxim Dmitrievsky:
drawback?)))

even better this way:


figure out how to solve it.
 

Maxim Dmitrievsky:


What about forecasting? ))

 
Smokchi Struck:
drawback?)))

figure out how to solve it.

In the current dimension there's no way to solve it; that's the problem. Your regression knows nothing about the possibility of such an event occurring.

 
Novaja:
By the way, Victor has an article about kernel probability densities, here
https://www.mql5.com/ru/articles/396

I don't think it's about kernel tricks (mapping feature vectors into other spaces) -

at least not in my opinion... I haven't read it.

 
Maxim Dmitrievsky:

In the current dimension there's no way to solve it; that's the problem. Your regression knows nothing about the possibility of such an event occurring.

Well, that's why we're here, to solve problems))
 
Smokchi Struck:
drawback?)))

even better like this:


figure out how to solve it.

usually like this.


 
Maxim Dmitrievsky:

In the current dimension there's no way to solve it; that's the problem. Your regression knows nothing about the possibility of such an event occurring.

You can show the graph as one-point moves. If the graph went up by 1 point, draw an upward one-point segment; if it went down by 1 point, draw a downward one-point segment. Then there will be a lot of points.
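The one-point-segment idea above can be sketched like this (my own illustration of the poster's description, assuming an integer-point series):

```python
def to_unit_steps(series):
    # Replace the raw series with one +1/-1 segment per point of
    # movement, so every move is drawn as many one-point segments.
    steps = []
    for prev, cur in zip(series, series[1:]):
        move = cur - prev
        step = 1 if move > 0 else -1
        steps.extend([step] * abs(int(move)))
    return steps

path = [0, 3, 1, 2]          # moves of +3, -2, +1
print(to_unit_steps(path))   # [1, 1, 1, -1, -1, 1]
```

Summing the unit steps recovers the net move of the original series, but the regression now sees one sample per point travelled rather than one per bar.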
