Machine learning in trading: theory, models, practice and algo-trading - page 1626

It seems to me that probability is probability in both cases...
Uncertainty comes in different kinds, but probability (where it exists) is always probability.)
Game theory usually tries to reduce game uncertainty to probabilistic uncertainty. For example, through Nash equilibrium in mixed strategies.
For markets, the main problem in converting to probabilistic models is the essential non-stationarity of the resulting models.
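As a side note, here is a minimal sketch of what "reducing game uncertainty to probabilistic uncertainty" looks like in the textbook case of matching pennies; the payoff matrix and the indifference calculation are the standard ones, nothing here comes from the thread:

```python
import numpy as np

# Matching pennies for the row player: no pure-strategy equilibrium exists.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# In a 2x2 zero-sum game the row player's equilibrium mix p makes the
# column player indifferent between his two actions:
#   p*A[0,0] + (1-p)*A[1,0] == p*A[0,1] + (1-p)*A[1,1]
p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
print(f"row player plays heads with probability {p:.2f}")  # 0.50

# The game's uncertainty is now probabilistic: the opponent's action is a
# Bernoulli(0.5) draw, and the value of the game is just an expectation.
value = p * A[0, 0] + (1 - p) * A[1, 0]  # column plays heads
print(f"game value for the row player: {value:.2f}")  # 0.00
```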
Indeed, there are plenty of crazy ones, and ten thousand is just the beginning: some diehards try to cram in a million points! And then there are ticks and order books...
All because the information is represented incorrectly.
A non-stationary process gets treated like a stationary one, as if measuring a sea wave with a centimeter ruler.
First the fractal structures need to be reduced to one dimension (the non-stationary transformed into stationary), then patterns found, and only then statistics/probability applied.
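For what it's worth, here is the standard first step in that direction, a minimal sketch assuming plain price data: differencing to log returns and rescaling by a rolling volatility estimate. This is only the usual textbook transform, not the fractal reduction the poster has in mind:

```python
import numpy as np

# Prices themselves are non-stationary, so work with (log) returns instead.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 5000)))  # toy random walk

log_returns = np.diff(np.log(prices))  # differencing: the basic stationarizer

# Optionally standardize by rolling volatility so the scale of the series
# is also roughly constant over time.
window = 100
vol = np.array([log_returns[max(0, i - window):i].std()
                for i in range(1, len(log_returns) + 1)])
standardized = log_returns / np.where(vol > 0, vol, 1.0)
```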
That's what I mean. Many people forget one very important fundamental rule: if a problem can be solved without a neural network (NS), it should be solved without one. From this rule it follows that we should process the data mathematically for as long as mathematics allows, and only when mathematics becomes powerless do we bring in the NS. In other words, the input data should be filtered, smoothed, normalized, etc. MATHEMATICALLY, as far as mathematics permits in principle, and only then should we start applying the NS. Not the other way around, where we stuff junk into the NS and then sit waiting for a miracle. That's not how it works.

For example, with 50 input vectors I discarded all the unneeded market segments that would have been noise in training, and with this number of inputs I obtain a model with 90-95% training quality covering two months on M5. What would happen if I fed that whole segment to the network? Nothing. I would get a model of much worse quality, unfit for use. The segment was two months in both the first and the second case, and it stayed that way. But the result is different.
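For illustration, a minimal sketch of the "mathematics first, network second" pipeline described above; the smoothing window, z-score normalization and clipping threshold are my own illustrative choices, not the poster's actual 50-input setup:

```python
import numpy as np

def preprocess(raw, smooth_window=5, z_clip=3.0):
    # 1) smooth: simple moving average to suppress high-frequency noise
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(raw, kernel, mode="valid")

    # 2) normalize: z-score so all inputs share a common scale
    z = (smoothed - smoothed.mean()) / smoothed.std()

    # 3) filter: clip extreme outliers that would dominate training
    return np.clip(z, -z_clip, z_clip)

rng = np.random.default_rng(1)
raw_feature = np.cumsum(rng.normal(size=1000))  # toy input vector
clean_feature = preprocess(raw_feature)         # feed THIS to the net
```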
Cool, what can I say. I haven't managed to get accuracy above 55% when predicting the future direction without any admixture of the past. In general it's better to measure not accuracy but the correlation with future returns; this number is proportional to the Sharpe ratio you will end up with (depending on trading costs, of course). A correlation of 3% is enough for an annual SR of ~1-1.5.
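A quick back-of-envelope check of that claim; the figure of roughly 2,000 independent bets per year is my assumption, not the poster's:

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n_years, bets_per_year = 0.03, 50, 2000  # 3% signal-return correlation
n = n_years * bets_per_year

returns = rng.normal(size=n)
# Signal with correlation rho to the future return:
signal = rho * returns + np.sqrt(1 - rho**2) * rng.normal(size=n)

pnl = np.sign(signal) * returns          # trade the predicted direction
sr_annual = pnl.mean() / pnl.std() * np.sqrt(bets_per_year)
print(f"annualized Sharpe (no costs): {sr_annual:.2f}")  # typically around 1
```

The theoretical value here is SR ≈ sqrt(2/pi) * rho * sqrt(bets per year) ≈ 0.8 * 0.03 * 45 ≈ 1.1, which is indeed in the quoted ~1-1.5 range before costs; more or fewer independent bets per year shifts it accordingly.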
Try filtering the input mathematically and thereby reducing the training sample without shortening the time period, and I'm sure the quality of the NS will increase. That way you get rid of unnecessary noise that mathematics can eliminate trivially. Again, I quoted the generalization-ability values reported by the optimizer. I.e. the time period is the same, but the quality of the resulting network will be better, which will show up in the feedback. IMHO
Excuse me?
In the case of essential non-stationarity it is more correct to speak of multifractality, since the fractal characteristics themselves change with time. And those changes are as unpredictable as any others.
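One rough way to see fractal characteristics drifting over time is to estimate a Hurst exponent in rolling windows; this lag-variance estimator and the window sizes are illustrative choices of mine, and a proper multifractal analysis would need more than this (e.g. MF-DFA):

```python
import numpy as np

def hurst(x, max_lag=20):
    lags = np.arange(2, max_lag)
    tau = np.array([np.std(x[lag:] - x[:-lag]) for lag in lags])
    # For fractional Brownian motion, std of the lag-diff scales as lag**H
    return np.polyfit(np.log(lags), np.log(tau), 1)[0]

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(size=10_000))  # toy series

window = 1000
h = [hurst(prices[i:i + window]) for i in range(0, len(prices) - window, window)]
print(np.round(h, 2))  # stable H ~ 0.5 here; on real data H often drifts
```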
Well, let's say not all minute bars, but only those with a body greater than N points, for example (see the sketch below). That way you reduce the amount of data, but not the sampled time interval. And the net will thank you.
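A minimal sketch of that bar filter; the point size, the threshold N and the synthetic bar data are all assumptions for illustration:

```python
import numpy as np

POINT = 0.00001            # e.g. a 5-digit FX symbol (assumption)
N = 30                     # minimum bar body, in points (assumption)

# Toy stand-in for M1 open/close data:
rng = np.random.default_rng(3)
opens = 1.10 + np.cumsum(rng.normal(0, 0.0002, 10_000))
closes = opens + rng.normal(0, 0.0003, 10_000)

# Keep only bars whose body exceeds N points: fewer training rows,
# but the calendar span of the sample is unchanged.
body_points = np.abs(closes - opens) / POINT
mask = body_points > N
print(f"kept {mask.sum()} of {len(mask)} bars")
```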
Since we've touched on this topic anyway, I'll say that lately I have been helping the net along with preprocessing. Say, I first optimize the parameters of the Sequenta itself and get a Sequenta that is already scoring on its own. Then I ask the network to make that already-scoring Sequenta score even better. A team game, so to speak: the Sequenta earns half, and the net earns the other half by helping the scoring Sequenta score better. That is, I don't force the NS, I only ask it to help a little, and that little is enough.
All this will be covered in the video...