Machine learning in trading: theory, models, practice and algo-trading - page 1146

 
Aleksey Nikolayev:

You overestimate me) I haven't yet got beyond the introduction)

As the authors themselves write, it's a certain subclass of games against nature (the environment). I'm sure almost all of our models fall within games against nature, but I don't know how suitable these "bandits" are.

I like hidden (latent) Markov processes better. There, non-stationarity can be a consequence of the fact that we don't observe all the variables. Roughly speaking, a process that looks non-stationary to us is derived from a stationary process that is known only to the market maker.
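A minimal sketch of that idea (a toy example of my own, not Aleksey's actual model): a two-state hidden Markov regime process where each hidden state emits returns from its own stationary distribution. Whoever sees the state (the market maker) deals with two stationary series; whoever sees only the observations sees apparent non-stationarity. The transition matrix and emission parameters below are assumed values for illustration.

import numpy as np

rng = np.random.default_rng(0)

# hidden-state transition matrix (assumed values)
P = np.array([[0.99, 0.01],
              [0.02, 0.98]])
# per-state emission parameters: (mean, std) of returns in each regime
params = [(0.0005, 0.005), (-0.0010, 0.020)]

n = 5000
states = np.zeros(n, dtype=int)
obs = np.zeros(n)
for t in range(n):
    if t > 0:
        states[t] = rng.choice(2, p=P[states[t - 1]])
    mu, sigma = params[states[t]]
    obs[t] = rng.normal(mu, sigma)

# conditional on the hidden state the series is stationary...
for s in (0, 1):
    seg = obs[states == s]
    print("state", s, "mean", round(seg.mean(), 5), "std", round(seg.std(), 5))
# ...but the observed series drifts between regimes over time
print("observed std, first vs last 1000 bars:",
      round(obs[:1000].std(), 5), round(obs[-1000:].std(), 5))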

I can send you the code, but I'm not sure anyone will understand it or suggest anything new :)

 
Grail:

I get it, bullshit is bullshit))

This is not about peeking, although under certain conditions it can be that too. The point is that the OOS should be as close to the real as possible, because you want the OOS result to be repeated, plus or minus, on the real; if you test on the more distant past, the result will be close to that past, and the market may have changed more or less in the meantime. Your method can lead to complete absurdity, for example if the OOS and the real are separated by years)))

You're the one writing something absurd, because the market can change both in the past and in the future relative to the learn period. Moreover, the closer the learn period is to the present, the less likely the market is to change tomorrow. And I'm simply looking at how well the algorithm is able to generalize on any OOS.

Someone just told you it should be this way, and you don't really know why yourself; it's just speculation.
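To make the disagreement concrete, here is a minimal sketch of the two split layouts being argued about, assuming a hypothetical chronologically ordered DataFrame df of features and labels (the names are illustrative, not from anyone's actual code):

import pandas as pd

def adjacent_oos_split(df: pd.DataFrame, oos_frac: float = 0.2):
    # learn on the older part; the most recent part is the OOS,
    # sitting right next to the future "real" period
    cut = int(len(df) * (1 - oos_frac))
    return df.iloc[:cut], df.iloc[cut:]

def distant_oos_split(df: pd.DataFrame, oos_frac: float = 0.2):
    # the oldest part is the OOS; the learn set sits between it
    # and the "real" period, possibly years away
    cut = int(len(df) * oos_frac)
    return df.iloc[cut:], df.iloc[:cut]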

 
Maxim Dmitrievsky:

there is absolutely no difference

go ahead then, the real will put everything in its place

 
Maxim Dmitrievsky:

I can send you the code, but I'm not sure anyone will understand it or suggest anything new :)

Sometimes it's hard to understand even your own code when you haven't worked with it for a month)

 
TheXpert:

The real will put everything in its place.

Jesus Christ, we're discussing bandits here.

 
Maxim Dmitrievsky:

This is not about peeking, although under certain conditions it can be that too. The point is that the OOS should be as close to the real as possible, because you want the OOS result to be repeated, plus or minus, on the real; if you test on the more distant past, the result will be close to that past, and the market may have changed more or less in the meantime. Your method can lead to complete absurdity, for example if the OOS and the real are separated by years)))

You're the one writing something absurd, because the market can change both in the past and in the future relative to the learn period. Moreover, the closer the learn period is to the present, the less likely the market is to change tomorrow. And I'm simply looking at how well the algorithm is able to generalize on any OOS.

In general, the essence of algotrading is that the market changes at least partly continuously; there is a kind of "inertia" due to the diffusion of information. That is, what happened yesterday is more likely to repeat today than what happened a month (or a year) ago. You optimize so that the OOS is closer to the real, and then you simply retrain including the OOS data, so what is the problem? That's what everyone usually does: first divide into Learn and Train, learn on Learn, test on Train, and then retrain on Learn + Train with the optimized parameter configuration.


Of course I won't argue or insist; my colleague put it correctly, "the real will put everything in its place". Market lessons are remembered better than forum demagoguery))
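A minimal sketch of the procedure described above (Learn/Train split, pick a configuration, then retrain on everything), assuming chronologically ordered arrays X, y and using a generic random forest as a stand-in; the parameter grid is made up and this is not anyone's actual RDF code:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def learn_test_retrain(X, y, test_frac=0.2, n_trees_grid=(100, 300, 500)):
    # chronological split: older block = Learn, most recent block = Test
    cut = int(len(X) * (1 - test_frac))
    X_learn, y_learn = X[:cut], y[:cut]
    X_test, y_test = X[cut:], y[cut:]

    # 1) choose the configuration by fitting on Learn and scoring on Test
    best_score, best_n = -np.inf, None
    for n_trees in n_trees_grid:
        model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        model.fit(X_learn, y_learn)
        score = accuracy_score(y_test, model.predict(X_test))
        if score > best_score:
            best_score, best_n = score, n_trees

    # 2) retrain on Learn + Test with the chosen configuration
    final_model = RandomForestClassifier(n_estimators=best_n, random_state=0)
    final_model.fit(X, y)
    return final_model, best_n, best_score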

 
Grail:

In general, the essence of algotrading is that the market changes at least partly continuously; there is a kind of "inertia" due to the diffusion of information. That is, what happened yesterday is more likely to repeat today than what happened a month (or a year) ago. You optimize so that the OOS is closer to the real, and then you simply retrain including the OOS data, so what is the problem? That's what everyone usually does: first divide into Learn and Train, learn on Learn, test on Train, and then retrain on Learn + Train with the optimized parameter configuration.

So what difference does it make which side the OOS is on? ))

Especially considering that Learn and Train are the same thing (I understand you meant Test, but the highlighted part doesn't negate it).

 
Maxim Dmitrievsky:

So what difference does it make which side the OOS is on? ))

Especially considering that Learn and Train are the same thing (I understand you meant Test, but the highlighted part doesn't negate it).

typo, thanks, fixed it

There is a big difference: the OOS is closer to the real, and you should optimize for what is closer to real life, not for who knows what from who knows when.

 
Grail:

typo, thanks, fixed it

There is a big difference: the OOS is closer to the real, and you should optimize for what is closer to real life, not for who knows what from who knows when.

The task is to make the two pieces indistinguishable (the same errors, etc.); in that context, the question of which piece sits where loses all meaning.
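One possible reading of that criterion as a sketch (not Grail's actual check): fit once, then compare the same error metric on the two pieces; if they really are indistinguishable in this sense, the gap should be small.

from sklearn.metrics import log_loss

def error_gap(model, X_learn, y_learn, X_oos, y_oos):
    # same metric on both pieces; a small gap means the learn and OOS
    # parts are hard to tell apart by model performance alone
    e_learn = log_loss(y_learn, model.predict_proba(X_learn))
    e_oos = log_loss(y_oos, model.predict_proba(X_oos))
    return e_learn, e_oos, e_oos - e_learn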

 
Maxim Dmitrievsky:

Good lord, what real? We're discussing bandits here.

So have you actually implemented a "bandit" algorithm in RDF?

Or have you coded anything specifically for the "Bandit" algorithm?
