Machine learning in trading: theory, models, practice and algo-trading - page 1555

The idea with mixing is interesting, but it seems to me that the price movement from one key point to another needs to be randomized, while the blocks themselves are built with ZZ - then it will really look like a market.
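One plausible reading of "randomize the movement between key points" is a Brownian bridge per leg: keep the ZZ pivots fixed and randomize only the path between them. This is a hedged sketch of that idea, not the poster's actual method; the function name and parameters are my own.

```python
import numpy as np

def randomize_between_pivots(pivots, steps_per_leg=20, noise=0.3, seed=0):
    """Build a synthetic path through the given pivot prices (e.g. ZZ
    extrema), randomizing the walk between consecutive pivots with a
    Brownian bridge so every leg still starts and ends exactly on a pivot."""
    rng = np.random.default_rng(seed)
    legs = []
    for a, b in zip(pivots[:-1], pivots[1:]):
        n = steps_per_leg
        t = np.linspace(0.0, 1.0, n + 1)
        # random walk pinned to zero at both ends (Brownian bridge)
        w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, noise, n))])
        bridge = w - t * w[-1]
        # straight line between the pivots plus the bridge noise
        legs.append((a + (b - a) * t + bridge)[:-1])
    legs.append(np.array([float(pivots[-1])]))
    return np.concatenate(legs)
```

Generating many such paths from one set of ZZ pivots gives a sample where the extrema are preserved but the route between them varies.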
Then the model will capture the regularities that lead to the appearance of such extrema, and new data may produce nonsense.
My model learns not to be tied to the shape of the price movement, but to pick up small patterns like volatility clustering, which is what distinguishes the market from a random walk (SB). So it's pure econometrics (in my version).
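Volatility clustering of the kind mentioned above can be measured directly: the autocorrelation of absolute returns is clearly positive for a persistent-volatility process and near zero for i.i.d. random-walk returns. A toy sketch (the GARCH-style generator and its parameters are illustrative assumptions, not the poster's model):

```python
import numpy as np

def acf_abs_returns(returns, lag=1):
    """Lag-`lag` autocorrelation of absolute returns. Volatility clustering
    shows up as a clearly positive value; for i.i.d. (random-walk) returns
    it stays near zero."""
    x = np.abs(np.asarray(returns, dtype=float))
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(1)

# Random-walk returns: i.i.d. noise, no volatility clustering
rw = rng.normal(0.0, 1.0, 5000)

# Toy GARCH(1,1)-style returns: variance persists, so |returns| cluster
n = 5000
mkt = np.zeros(n)
sig2 = np.ones(n)            # conditional variance
mkt[0] = rng.normal()
for t in range(1, n):
    sig2[t] = 0.05 + 0.75 * sig2[t - 1] + 0.2 * mkt[t - 1] ** 2
    mkt[t] = np.sqrt(sig2[t]) * rng.normal()
```

Here `acf_abs_returns(mkt)` comes out well above zero while `acf_abs_returns(rw)` hovers near it, which is exactly the kind of small pattern separating a market from SB.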
I was just optimizing and realized that my laptop can't handle it anymore. I've got to get some decent hardware. But that will push me toward suboptimal code, so I'll see what I can do.
The second option is to throw out CatBoost and rewrite everything as a random forest in MQL. But it is more convenient to do the research in Python.
I don't know; in my opinion slicing clusters by ZZ is productive, especially if you add the market's average construction rules to them. The point is that the same point can be reached by different paths, while the sample focuses on only a small set of such paths, and this way the sample becomes balanced. Perhaps we have different targets, so we think differently about what would work best for a particular study. Your clusters follow only a single size rule, which simply generates a random walk if the predictors take data at the junction of two clusters...
And the hardware - yes, get it if it speeds up the flight of fancy!
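For reference, a minimal ZigZag-style pivot finder of the kind the slicing above relies on: it returns alternating swing highs and lows where each reversal exceeds a percentage threshold, and the bars between consecutive pivots form one "cluster" (leg). This is a generic sketch, not the specific ZZ variant either poster uses.

```python
import numpy as np

def zigzag_pivots(prices, pct=0.02):
    """Indices of alternating swing highs/lows, where each swing
    reverses by at least `pct` (fractional move) from the extremum."""
    prices = np.asarray(prices, dtype=float)
    pivots = []
    trend = 0              # +1 looking for a high, -1 for a low, 0 undecided
    hi_i = lo_i = 0        # provisional extremum candidates
    for i in range(1, len(prices)):
        if prices[i] > prices[hi_i]:
            hi_i = i
        if prices[i] < prices[lo_i]:
            lo_i = i
        if trend <= 0 and prices[i] >= prices[lo_i] * (1 + pct):
            pivots.append(lo_i)        # confirmed swing low
            trend, hi_i = 1, i
        elif trend >= 0 and prices[i] <= prices[hi_i] * (1 - pct):
            pivots.append(hi_i)        # confirmed swing high
            trend, lo_i = -1, i
    return pivots
```

Slicing the sample at these indices gives legs of varying length, which is precisely what makes the "same size rule" objection above relevant.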
ah, well, clusters of different sizes can be made too; not sure it will help
I think the idea of mixing is flawed, but it's interesting.
Random sampling, or in your words shuffling (if I understand you correctly), is one of the ways to reduce overfitting. I don't know if it's useful in regression, but it works great in classification.
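The random sampling mentioned above can be sketched as repeated shuffled train/test splits, done here in plain numpy so the mechanics are visible; the function name and defaults are my own assumptions:

```python
import numpy as np

def shuffle_split(n, test_frac=0.3, n_splits=5, seed=0):
    """Repeated random (shuffled) train/test splits: each split draws a
    fresh permutation of the sample, so the out-of-sample estimate does
    not hinge on one fixed ordering."""
    rng = np.random.default_rng(seed)
    n_test = int(round(n * test_frac))
    for _ in range(n_splits):
        idx = rng.permutation(n)
        yield idx[n_test:], idx[:n_test]   # (train indices, test indices)
```

Averaging a metric over the splits gives a less ordering-dependent estimate. One caveat relevant to this thread: for time series, shuffling leaks future information into training, so blocked or walk-forward splits are usually preferred on returns data.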
yes, but when there are no regularities in the returns, it's wasted effort )
Do not use ZZ or any additional indicators. Only OHLC from several timeframes (the timeframes should differ by a factor of 4-6; for example, 1-5-30-H3... up to a monthly timeframe - choose them yourself) and, perhaps, ticks as well for early warning.
For the high and low prices - separate convolutional structures. For OHLC - a recurrent structure. And so on for all the prices used. All their signals are fed, for example, into a fully connected network.
The ticks, passed through the recurrent network, are also fed to one of the inputs of the fully connected network.
Optimize by the rate of deposit growth. As a result, the combined network should decide by itself what the lot volume is and select the points for opening and closing. Something like that.
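The layout described (convolutional branches on highs/lows, a recurrent branch on OHLC bars, all feeding a fully connected head) might look roughly like this toy numpy sketch. All weights are random and untrained, the tick branch is omitted, and a real version would use a deep-learning framework; this only illustrates the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """Valid 1-D convolution: a stand-in for a convolutional branch."""
    return np.convolve(x, kernel, mode="valid")

def simple_rnn(seq, w_in, w_rec):
    """One tanh recurrent unit over the OHLC rows: a stand-in for the
    recurrent branch."""
    h = 0.0
    for x_t in seq:
        h = np.tanh(np.dot(w_in, x_t) + w_rec * h)
    return h

# Toy data: 64 bars of OHLC; highs and lows are columns 1 and 2
bars = np.cumsum(rng.normal(0.0, 1.0, (64, 4)), axis=0) + 100.0
highs, lows = bars[:, 1], bars[:, 2]

# Branch 1: separate convolutions over highs and lows (wave shapes)
k = np.array([0.25, 0.5, 0.25])
feat_hi = conv1d(highs, k)[-1]          # last output of each branch
feat_lo = conv1d(lows, k)[-1]

# Branch 2: recurrent pass over full OHLC rows (candlestick context)
w_in = rng.normal(0.0, 0.1, 4)
feat_rnn = simple_rnn(bars, w_in, 0.5)

# Head: fully connected layer combining all branch outputs
features = np.array([feat_hi, feat_lo, feat_rnn])
w_out = rng.normal(0.0, 0.1, 3)
signal = np.tanh(np.dot(w_out, features))   # e.g. desired position in [-1, 1]
```

In the scheme above, `signal` would be mapped to lot volume and open/close decisions, with all weights trained against the deposit-growth objective.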
And there is a bow on top )
What do you suggest as the target function for the intermediate nets? That is, what should we train them on?
By the rate of deposit growth - that is the target function. The convolution of the highs and the convolution of the lows are a rough analogue of ZZ: they reveal wave fractals. The recurrent structures on OHLC catch candlestick combinations - candlestick patterns (fractals).
Nets on the data from the different prices reveal the fractals in those prices. The target function of deposit growth rate determines how many of the fractals appearing on the different folds are taken into account.
and a bow on top of it )
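As one concrete reading of "optimize by the rate of deposit growth" (the poster gives no exact formula, so this is a hedged assumption): the per-step geometric growth of the equity curve as a single scalar fitness.

```python
import numpy as np

def deposit_growth_rate(equity):
    """Per-step geometric growth rate of an equity (deposit) curve:
    the single scalar one could maximize as the target function."""
    equity = np.asarray(equity, dtype=float)
    steps = len(equity) - 1
    return float((equity[-1] / equity[0]) ** (1.0 / steps) - 1.0)
```

For example, a deposit going 100 → 110 → 121 grows at 10% per step; the optimizer would compare candidate networks by this number.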
That's a matter of taste.
By the rate of deposit growth. That is the target function.
What does the deposit consist of? Of the buy/sell/wait commands.
The final NN will be trained on these commands, and will then predict them.
What should the intermediate networks be trained on? ZigZags? To teach a network something, you have to show it an answer. Which ZigZag algorithm, and with what parameters, do you propose to use as the training signal?