Machine learning in trading: theory, models, practice and algo-trading - page 1555

 
Aleksey Vyazmikin:

The idea of shuffling is interesting, but it seems to me that the price movement from one key point to the next needs to be randomized, and the blocks themselves should be built with ZigZag (ZZ); then it will really look like a market.

Then the model will latch onto the regularities that produce such extrema, and on new data it may produce nonsense.

My model learns not to be tied to the shape of the price movement but to pick up small patterns like volatility clustering, which is what distinguishes the market from a random walk (SB). So it's pure econometrics (in my version).
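
A minimal sketch of the kind of check behind that claim (my illustration, not Maxim's code): the autocorrelation of absolute returns stays persistently positive for a series with volatility clustering, and is near zero for a random walk.

```python
import numpy as np

def acf(x, lags):
    """Sample autocorrelation of a series at the given lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in lags])

rng = np.random.default_rng(0)
rw_returns = rng.normal(0.0, 0.01, 5000)   # random-walk (SB) returns: i.i.d. noise

# Toy "market" returns with volatility clustering (GARCH-like recursion)
h, eps = 1e-4, rng.normal(size=5000)
market_returns = np.empty(5000)
for t in range(5000):
    market_returns[t] = np.sqrt(h) * eps[t]
    h = 1e-6 + 0.1 * market_returns[t] ** 2 + 0.85 * h

lags = range(1, 11)
print("ACF |returns|, random walk:", acf(np.abs(rw_returns), lags).round(3))
print("ACF |returns|, clustered  :", acf(np.abs(market_returns), lags).round(3))
# The clustered series shows positive autocorrelation of |returns| at all lags;
# the random walk's is near zero -- the pattern the model is meant to pick up.
```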

I was just running an optimization and realized my laptop can't handle it anymore. I'll have to get some decent hardware. But that will push me toward non-optimal code, so I'll see what I can do.

The second option is to throw out CatBoost and rewrite everything as a forest in MQL. But it's more convenient to do the research in Python.
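
The Python research step could be as simple as this sketch (placeholder features and labels; a plain scikit-learn forest standing in for CatBoost before any port to MQL):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))                     # placeholder predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 0).astype(int)  # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
forest = RandomForestClassifier(n_estimators=200, max_depth=6, random_state=0)
forest.fit(X_tr, y_tr)
print("out-of-sample accuracy:", forest.score(X_te, y_te))
# Each fitted tree (forest.estimators_[i].tree_) is just split thresholds and
# leaf values, so a trained forest can later be exported as plain if/else
# code for MQL once the research in Python settles on a configuration.
```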
 
Maxim Dmitrievsky:

Then the model will latch onto the regularities that produce such extrema, and on new data it may produce nonsense.

My model learns not to be tied to the shape of the price movement but to pick up small patterns like volatility clustering, which is what distinguishes the market from a random walk (SB). So it's pure econometrics (in my version).

I was just running an optimization and realized my laptop can't handle it anymore. I'll have to get some decent hardware. But that will push me toward non-optimal code, so I'll see what I can do.

I don't know; in my view, slicing clusters by ZZ is exactly what is productive, especially if you add to them averaged construction rules taken from the market. The point is that the same point can be reached by different paths, while the sample covers only a small set of such paths, and this way the sample gets balanced. Perhaps we have different targets, so we think differently about what works best for a particular study. Your clusters follow a single fixed-size rule, which simply generates a random walk if the predictors take data at the junction of two clusters...

And the hardware - yes, get it, if it speeds up the flight of fancy!
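
As I read the ZZ idea (the key-point randomization above plus the slicing here), a minimal sketch could look like this; the threshold ZigZag and the block-shuffling rule are my assumptions, not Aleksey's actual construction:

```python
import numpy as np

def zigzag_pivots(price, threshold=0.01):
    """Indices of ZZ turning points with a fractional reversal threshold."""
    n = len(price)
    pivots = [0]
    hi = lo = 0                # candidate indices for the next high / low pivot
    direction = 0              # +1 in an up-leg, -1 in a down-leg, 0 undecided
    for i in range(1, n):
        if price[i] > price[hi]: hi = i
        if price[i] < price[lo]: lo = i
        if direction >= 0 and price[i] <= price[hi] * (1 - threshold):
            pivots.append(hi); direction = -1; lo = i   # high confirmed, now falling
        elif direction <= 0 and price[i] >= price[lo] * (1 + threshold):
            pivots.append(lo); direction = 1; hi = i    # low confirmed, now rising
    pivots.append(n - 1)
    return sorted(set(pivots))

def shuffle_blocks(price, pivots, rng):
    """Rebuild a synthetic series from shuffled inter-pivot blocks of increments."""
    blocks = [np.diff(price[a:b + 1]) for a, b in zip(pivots[:-1], pivots[1:])]
    rng.shuffle(blocks)
    return np.concatenate([[price[0]],
                           price[0] + np.cumsum(np.concatenate(blocks))])

rng = np.random.default_rng(2)
price = 100 * np.exp(np.cumsum(rng.normal(0, 0.005, 3000)))  # placeholder series
piv = zigzag_pivots(price, threshold=0.02)
synthetic = shuffle_blocks(price, piv, rng)
print(len(piv), "pivots; synthetic length:", len(synthetic))
```

Shuffling whole inter-pivot blocks keeps each leg's internal structure while destroying the order in which legs occurred; randomizing the path inside each block while pinning its endpoints would be the further step Aleksey describes.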

 
Aleksey Vyazmikin:

I don't know; in my view, slicing clusters by ZZ is exactly what is productive, especially if you add to them averaged construction rules taken from the market. The point is that the same point can be reached by different paths, while the sample covers only a small set of such paths, and this way the sample gets balanced. Perhaps we have different targets, so we think differently about what works best for a particular study. Your clusters follow a single fixed-size rule, which simply generates a random walk if the predictors take data at the junction of two clusters...

And the hardware - yes, get it, if it speeds up the flight of fancy!

Ah, well, you can make clusters of different sizes too; I'm not sure that will save it.

I think the idea of shuffling is flawed, but it's interesting.

 
Maxim Dmitrievsky:

Ah, well, you can make clusters of different sizes too; I'm not sure that will save it.

I think the idea of shuffling is flawed, but it's interesting.

Random sampling, or in your words shuffling (if I understand you correctly), is one of the ways to reduce overfitting. I don't know how appropriate it is for regression, but in classification it works great.
It seems to me that there is a certain confrontation between algorithms that pull the function toward the target and algorithms that resist this. A kind of resistance arises in the learning process, so that life doesn't seem too easy :-)
If there is no resistance, learning approaches the target too quickly, and crossing the overfitting threshold is very likely.
If there is resistance, but it is weak, the effect is the same.
If the resistance is too strong, you get underfitting: the model cannot reach the confidence interval where the generalization zone lies, and shows poor training results. Also not good.
There is only one conclusion: resistance to learning, i.e. methods aimed at reducing overfitting, must be balanced against the base algorithm, so that the function lands within the confidence interval with enviable consistency, but by no means overshoots it, or does so very rarely.
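
Mihail's "random sampling" can be illustrated with bagging over row subsamples (a generic sketch on synthetic data, not his setup): drawing each learner's training rows at random is exactly the kind of resistance he describes.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 20))                          # placeholder features
y = (X[:, :3].sum(axis=1) + rng.normal(0, 2, 3000) > 0).astype(int)  # noisy labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for frac in (1.0, 0.5, 0.2):                             # fraction of rows per tree
    model = BaggingClassifier(
        estimator=DecisionTreeClassifier(),              # deep trees: strong pull to target
        n_estimators=100, max_samples=frac,
        bootstrap=False, random_state=0,
    ).fit(X_tr, y_tr)
    print(f"max_samples={frac}: train={model.score(X_tr, y_tr):.3f} "
          f"test={model.score(X_te, y_te):.3f}")
# Typically the train score falls and the train/test gap narrows as the
# fraction drops -- Mihail's "resistance" -- until too small a fraction
# tips the model over into underfitting.
```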
 
Mihail Marchukajtes:
Random sampling, or in your words shuffling (if I understand you correctly), is one of the ways to reduce overfitting. I don't know how appropriate it is for regression, but in classification it works great.
It seems to me that there is a certain confrontation between algorithms that pull the function toward the target and algorithms that resist this. A kind of resistance arises in the learning process, so that life doesn't seem too easy :-)
If there is no resistance, learning approaches the target too quickly, and crossing the overfitting threshold is very likely.
If there is resistance, but it is weak, the effect is the same.
If the resistance is too strong, you get underfitting: the model cannot reach the confidence interval where the generalization zone lies, and shows poor training results. Also not good.
There is only one conclusion: resistance to learning, i.e. methods aimed at reducing overfitting, must be balanced against the base algorithm, so that the function lands within the confidence interval with enviable consistency, but by no means overshoots it, or does so very rarely.

Yes, but when there are no regularities in the returns, it's wasted effort)

 
Aleksey Vyazmikin:

The idea of shuffling is interesting, but it seems to me that the price movement from one key point to the next needs to be randomized, and the blocks themselves should be built with ZigZag (ZZ); then it will really look like a market.

Don't use ZZ or any additional indicators. Only OHLC from several timeframes (adjacent timeframes should differ by a factor of 4-6, for example 1-5-30-H3... up to the monthly timeframe; pick them yourself) and, perhaps, also ticks for early warning.

Run the high and low prices separately through convolutional structures, and OHLC through a recurrent structure; the same for all the prices used. All their signals are fed, for example, into a fully connected network.

The ticks, passed through a recurrent network, are also fed to one of the inputs of the fully connected network.

Optimize by the rate of deposit growth. As a result, the combined network should decide on its own what the lot size is and select the opening and closing points. Something like that.
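
One possible reading of this architecture, sketched in PyTorch; the branch sizes, sequence lengths, and the four-number output head are my assumptions, not Eugeni's specification:

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Convolutional branch for a single price series (highs or lows of one TF)."""
    def __init__(self, out_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, out_dim),
        )

    def forward(self, x):                 # x: (batch, seq_len)
        return self.net(x.unsqueeze(1))   # -> (batch, out_dim)

class RecurrentBranch(nn.Module):
    """Recurrent branch for multi-channel input (OHLC bars, or raw ticks)."""
    def __init__(self, n_features, out_dim=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, out_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        _, h = self.rnn(x)
        return h[-1]                      # -> (batch, out_dim)

class TradingNet(nn.Module):
    """Per-timeframe conv branches on highs and lows, a recurrent branch on
    OHLC per timeframe, a recurrent branch on ticks, all merged by a fully
    connected head that outputs a lot size and open/close scores."""
    def __init__(self, n_timeframes=4):
        super().__init__()
        self.highs = nn.ModuleList(ConvBranch() for _ in range(n_timeframes))
        self.lows = nn.ModuleList(ConvBranch() for _ in range(n_timeframes))
        self.ohlc = nn.ModuleList(RecurrentBranch(4) for _ in range(n_timeframes))
        self.ticks = RecurrentBranch(1)
        self.head = nn.Sequential(
            nn.Linear(16 * (3 * n_timeframes + 1), 64), nn.ReLU(),
            nn.Linear(64, 4),             # [lot, open_long, open_short, close]
        )

    def forward(self, highs, lows, ohlc, ticks):
        feats = [b(x) for b, x in zip(self.highs, highs)]
        feats += [b(x) for b, x in zip(self.lows, lows)]
        feats += [b(x) for b, x in zip(self.ohlc, ohlc)]
        feats.append(self.ticks(ticks))
        out = self.head(torch.cat(feats, dim=1))
        lot = torch.sigmoid(out[:, :1])   # lot size in (0, 1), scaled externally
        return lot, out[:, 1:]            # raw open/close scores

# Smoke test with random tensors: 4 timeframes, batch of 2
net = TradingNet()
highs = [torch.randn(2, 64) for _ in range(4)]
lows = [torch.randn(2, 64) for _ in range(4)]
ohlc = [torch.randn(2, 64, 4) for _ in range(4)]
ticks = torch.randn(2, 256, 1)
lot, signals = net(highs, lows, ohlc, ticks)
print(lot.shape, signals.shape)           # torch.Size([2, 1]) torch.Size([2, 3])
```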

 
Eugeni Neumoin:

Don't use ZZ or any additional indicators. Only OHLC from several timeframes (adjacent timeframes should differ by a factor of 4-6, for example 1-5-30-H3... up to the monthly timeframe; pick them yourself) and, perhaps, also ticks for early warning.

Run the high and low prices separately through convolutional structures, and OHLC through a recurrent structure; the same for all the prices used. All their signals are fed, for example, into a fully connected network.

The ticks, passed through a recurrent network, are also fed to one of the inputs of the fully connected network.

Optimize by the rate of deposit growth. As a result, the combined network should decide on its own what the lot size is and select the opening and closing points. Something like that.

What do you propose as the target function for the intermediate networks? That is, what do you train them on?
 
Eugeni Neumoin:

Don't use ZZ or any additional indicators. Only OHLC from several timeframes (adjacent timeframes should differ by a factor of 4-6, for example 1-5-30-H3... up to the monthly timeframe; pick them yourself) and, perhaps, also ticks for early warning.

Run the high and low prices separately through convolutional structures, and OHLC through a recurrent structure; the same for all the prices used. All their signals are fed, for example, into a fully connected network.

The ticks, passed through a recurrent network, are also fed to one of the inputs of the fully connected network.

Optimize by the rate of deposit growth. As a result, the combined network should decide on its own what the lot size is and select the opening and closing points. Something like that.

And a bow on top )

 
Elibrarius:
What do you propose as the target function for the intermediate networks? That is, what do you train them on?

By the rate of deposit growth - that is the target function. The convolution over the highs and the convolution over the lows is a rough analogue of ZZ: it reveals wave fractals. The recurrent structures over OHLC catch candlestick combinations - candlestick patterns (fractals).

Networks on data from different prices reveal fractals at different prices. The deposit-growth-rate target function determines to what extent the fractals appearing on the different timeframes are taken into account.
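
A concrete form such a target function could take (a sketch; representing the trade decisions as signed position sizes applied to per-bar returns is my assumption):

```python
import numpy as np

def deposit_growth_rate(positions, returns, start_equity=1.0):
    """Growth-rate fitness: mean log growth of the equity curve produced by
    applying position sizes (signed lots) to per-bar returns."""
    equity = start_equity * np.cumprod(1.0 + positions * returns)
    if np.any(equity <= 0):
        return -np.inf                     # blown account: reject outright
    return np.log(equity[-1] / start_equity) / len(returns)

# Toy check against placeholder returns
rng = np.random.default_rng(4)
returns = rng.normal(0.0002, 0.01, 1000)   # hypothetical per-bar returns
good = np.sign(returns)                    # oracle positions, an upper bound
flat = np.zeros_like(returns)              # never trades
print(deposit_growth_rate(good, returns))  # high fitness
print(deposit_growth_rate(flat, returns))  # zero growth
# Because buy/sell/wait decisions make this objective non-differentiable,
# it would typically be maximized with black-box search (e.g. genetic or
# evolutionary optimization over the network weights) rather than backprop.
```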

Maxim Dmitrievsky:

And a bow on top )

That's a matter of taste.


 
Eugeni Neumoin:

By the rate of deposit growth - that is the target function.

What does the deposit growth consist of? Of buy/sell/wait commands.

These commands are what the final NN will be trained on, and what it will then predict.
But what should the intermediate networks be trained on? ZigZags? To teach a network something, you have to show it an answer. Which ZigZag algorithm, and with what parameters, do you propose to use as the training signal?
