Machine learning in trading: theory, models, practice and algo-trading - page 1626

 
mytarmailS:

It seems to me that probability is probability in both cases...

Uncertainty comes in different kinds, but probability (where it exists) is always probability.)

Game theory usually tries to reduce game uncertainty to probabilistic uncertainty. For example, through Nash equilibrium in mixed strategies.

For markets, the main obstacle in moving to probabilistic models is the substantial non-stationarity of the resulting models.
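
The Nash-equilibrium remark can be made concrete. Below is a minimal sketch (my illustration, not from the thread) of how game uncertainty collapses into plain probabilistic uncertainty: the closed-form mixed-strategy equilibrium of a 2x2 zero-sum game, applied to matching pennies.

```python
# Sketch: reducing game uncertainty to probabilistic uncertainty via a
# mixed-strategy Nash equilibrium of a 2x2 zero-sum game.
# Closed-form solution; assumes the game has no saddle point.

def mixed_equilibrium_2x2(a, b, c, d):
    """Row player's payoff matrix [[a, b], [c, d]].
    Returns (p, q, value): P(row plays strategy 1),
    P(column plays strategy 1), and the game value."""
    denom = a - b - c + d
    p = (d - c) / denom          # makes the column player indifferent
    q = (d - b) / denom          # makes the row player indifferent
    value = (a * d - b * c) / denom
    return p, q, value

# Matching pennies: each side should randomize 50/50, game value 0 --
# the "game" uncertainty becomes an ordinary probability distribution.
p, q, v = mixed_equilibrium_2x2(1, -1, -1, 1)
print(p, q, v)  # 0.5 0.5 0.0
```

At equilibrium neither player can gain by deviating, so the opponent's behavior is fully described by a probability distribution, which is exactly the reduction the post describes.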

 
Kesha Rutov:

Indeed, there are a lot of crazy people; ten thousand is just the beginning, and some blockheads are trying to cram in a million data points! And then there are also ticks and order books...

That's what I mean, too. Many people forget one very important fundamental rule: if a problem can be solved without a neural network (NS), it should be. From this rule it follows that we grind the data with mathematics for as long as mathematics allows, and only when mathematics becomes powerless do we bring in the NS. In other words, the input data should be filtered, smoothed, normalized, etc., MATHEMATICALLY, as far as mathematics permits in principle, and only then should the NS be applied. Not the other way around, where we stuff junk into the NS and sit waiting for a miracle. That's not how it works. For example, using 50 input vectors I rejected all the unnecessary market chunks that would have been noise in training, and with this number of inputs I obtain a model with 90-95% training quality that covers two months on M5. What would happen if I fed that whole segment to the network? Nothing good: I would get a model of much worse quality, unfit for use. The segment is the same two months in both cases, but the result is different.
 
Aleksey Nikolayev:

Uncertainty comes in different kinds, but probability (where it exists) is always probability.)

Game theory usually tries to reduce game uncertainty to probabilistic uncertainty. For example, through Nash equilibrium in mixed strategies.

For markets, the main obstacle in moving to probabilistic models is the substantial non-stationarity of the resulting models.

It's all because the information is represented incorrectly.

A non-stationary process is treated like a stationary one: like measuring a sea wave with a centimeter ruler.

First the fractal structures have to be collapsed into one dimension (the non-stationary transformed into stationary), then patterns/regularities found, and only then statistics/probability applied.
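
The "transform non-stationary into stationary" step has a textbook instance: working with log-returns instead of price levels. A minimal sketch (my illustration, with synthetic prices, not the poster's method):

```python
import math
import random

# Sketch: prices drift (non-stationary in the mean), but their
# log-returns are much closer to stationary. Synthetic data only.

random.seed(0)
prices = [100.0]
for _ in range(999):
    # geometric random walk: small drift, 1% per-step volatility
    prices.append(prices[-1] * math.exp(random.gauss(0.0005, 0.01)))

# differencing in log space turns levels into returns
log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# crude stationarity check: the two halves of the return series should
# have comparable means, while the price level itself keeps drifting
half = len(log_returns) // 2
m1 = sum(log_returns[:half]) / half
m2 = sum(log_returns[half:]) / (len(log_returns) - half)
print(m1, m2)
```

Only after such a transformation does it make sense to hunt for patterns and attach probabilities to them, which is the ordering the post argues for.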

 
Mihail Marchukajtes:
That's what I mean, too. Many people forget one very important fundamental rule: if a problem can be solved without a neural network (NS), it should be. From this rule it follows that we grind the data with mathematics for as long as mathematics allows, and only when mathematics becomes powerless do we bring in the NS. In other words, the input data should be filtered, smoothed, normalized, etc., MATHEMATICALLY, as far as mathematics permits in principle, and only then should the NS be applied. Not the other way around, where we stuff junk into the NS and sit waiting for a miracle. That's not how it works. For example, using 50 input vectors I rejected all the unnecessary market chunks that would have been noise in training, and with this number of inputs I obtain a model with 90-95% training quality that covers two months on M5. What would happen if I fed that whole segment to the network? Nothing good: I would get a model of much worse quality, unfit for use. The segment is the same two months in both cases, but the result is different.

Cool, what can I say. I haven't managed to get accuracy above 55% when predicting future direction without any admixture of the past. In general it is better to measure not accuracy but the correlation with future returns; that number is proportional to the Sharpe ratio you will end up with (depending on trading costs, of course). A correlation of 3% is enough for an annual SR of ~1-1.5.
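
The correlation-to-Sharpe claim can be sanity-checked with the Grinold-style approximation SR ≈ IC · sqrt(breadth), where IC is the correlation of forecasts with future returns and breadth is the number of independent bets per year. This sketch is my reconstruction of the arithmetic, not the poster's; costs are ignored, as the post itself notes.

```python
import math

# Grinold-style back-of-the-envelope: annualized Sharpe ratio from the
# forecast/return correlation (IC) and the number of independent bets.

def annual_sharpe(ic, bets_per_year):
    return ic * math.sqrt(bets_per_year)

ic = 0.03  # the 3% correlation from the post
for n in (1111, 2500):
    print(n, round(annual_sharpe(ic, n), 2))
```

With roughly 1100-2500 independent intraday bets per year, an IC of 3% lands in exactly the SR ~1-1.5 range the post mentions, so the numbers are at least internally consistent for an intraday strategy.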

 
Kesha Rutov:

Cool, what can I say. I haven't managed to get accuracy above 55% when predicting future direction without any admixture of the past. In general it is better to measure not accuracy but the correlation with future returns; that number is proportional to the Sharpe ratio you will end up with (depending on trading costs, of course). A correlation of 3% is enough for an annual SR of ~1-1.5.

Try to filter the input mathematically and thereby reduce the training sample without reducing the time period, and the quality of the NS will increase, I'm sure. That way you get rid of unnecessary noise that can simply be sifted out by mathematics. Again, I am quoting the generalization-ability values reported by the optimizer. I.e., the time period is the same, but the quality of the resulting network will be better, which will show in the results going forward. IMHO
 
Mihail Marchukajtes:
Try to filter the input mathematically and thereby reduce the training sample without reducing the time period, and the quality of the NS will increase, I'm sure. That way you get rid of unnecessary noise that can simply be sifted out by mathematics. Again, I am quoting the generalization-ability values reported by the optimizer. I.e., the time period is the same, but the quality of the resulting network will be better, which will show in the results going forward. IMHO

Excuse me?

 
mytarmailS:

It's all because the information is represented incorrectly.

A non-stationary process is treated like a stationary one: like measuring a sea wave with a centimeter ruler.

First the fractal structures have to be collapsed into one dimension (the non-stationary transformed into stationary), then patterns/regularities found, and only then statistics/probability applied.

In the case of substantial non-stationarity it is more correct to speak of multifractality, since the fractal characteristics themselves change over time. These changes are as unpredictable as any others.
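
The claim that fractal characteristics drift can be probed directly: estimate the Hurst exponent over a window and watch it change. A crude sketch (my illustration, dependency-free; a lag-scaling estimator, not R/S analysis) on a synthetic random walk, where H should sit near 0.5:

```python
import math
import random

# Crude Hurst estimate from how the std of increments scales with lag:
# std(lag k) ~ k^H, so H is the slope of log(std) vs log(lag).

def hurst(series, max_lag=20):
    lags = list(range(2, max_lag))
    tau = []
    for lag in lags:
        diffs = [series[i + lag] - series[i] for i in range(len(series) - lag)]
        mean = sum(diffs) / len(diffs)
        tau.append(math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs)))
    xs = [math.log(l) for l in lags]
    ys = [math.log(t) for t in tau]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # ordinary least-squares slope
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

random.seed(1)
walk = [0.0]
for _ in range(2000):
    walk.append(walk[-1] + random.gauss(0, 1))
print(round(hurst(walk), 2))  # a plain random walk gives H near 0.5
```

Running the same estimator over a sliding window of market data would show the time-varying H that makes the multifractal description appropriate, and that drift is exactly the unpredictable part the post points at.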

 
Kesha Rutov:

Excuse me?

Well, let's say not all the minute bars, but only those whose body is greater than N points, as an example. That way you reduce the amount of data but not the sampled time interval. And the grid will thank you.

Since we've touched on this topic anyway, I'll say that lately I have been helping the grid along with preprocessing. Say, I first optimize the parameters of the Sequenta itself and obtain a Sequenta that, in principle, already scores. Then I ask the network to make the already-scoring Sequenta score better. I mean a team game: one half, the Sequenta, does the scoring, and the other half, the net, helps the scoring Sequenta score better. So to speak, I don't force the NS, I only ask it to help a little, and that little is enough.

All this will be covered in the video...
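
The body-size filter described above is a one-liner in practice. A minimal sketch with toy bars (the tuple layout, point size, and threshold N are my assumptions, not from the post):

```python
# Keep only minute bars whose body exceeds N points, shrinking the
# training sample without shortening the covered time interval.

POINT = 0.00001          # assumed point size for a 5-digit quote
N = 30                   # assumed minimum body, in points

bars = [                 # (open, high, low, close) -- toy data
    (1.10000, 1.10060, 1.09990, 1.10050),   # body ~50 points -> kept
    (1.10050, 1.10060, 1.10040, 1.10055),   # body  ~5 points -> dropped
    (1.10055, 1.10060, 1.09980, 1.10000),   # body ~55 points -> kept
]

filtered = [b for b in bars if abs(b[3] - b[0]) / POINT > N]
print(len(filtered))  # 2
```

The time span of `bars` is untouched; only the rows fed to the network shrink, which is the point being argued.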

 
Mihail Marchukajtes:

Well, let's say not all the minute bars, but only those whose body is greater than N points, as an example. That way you reduce the amount of data but not the sampled time interval. And the grid will thank you.

Since we've touched on this topic anyway, I'll say that lately I have been helping the grid along with preprocessing. Say, I first optimize the parameters of the Sequenta itself and obtain a Sequenta that, in principle, already scores. Then I ask the network to make the already-scoring Sequenta score better. I mean a team game: one half, the Sequenta, does the scoring, and the other half, the net, helps the scoring Sequenta score better. So to speak, I don't force the NS, I only ask it to help a little, and that little is enough.

All this will be covered in the video...

In your case the NS analyzes only those minute bars which satisfy the condition. Thus we don't waste the network's potential on outright trash that can be eliminated mathematically, which is exactly what the rule stated above says. A rule that is better taken as LAW.
 
And Enokenty, I demand a public apology from you for equating me, a worthless programmer and handyman, with an outstanding personality like Yuri Reshetov. If you had seen his code and the manner in which he writes it, you would admire his programming style as much as I do. Yes, I made adjustments to the optimizer which, in my view, improved the final result, but comparing him to me is silly. Compared to him I am just a schoolboy who always skipped classes and keeps shouting "What?" from the back row. So, I'm waiting for an apology.