Machine learning in trading: theory, models, practice and algo-trading - page 1335

 
Aleksey Vyazmikin:
Maxim, I swapped the samples: training and validation traded places, and the test sample was left as is. What should the result be according to scientific dogma? I don't know yet myself; the processing isn't finished.

If your data and the model are adequate, then, in theory, the result should worsen.
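A minimal sketch of the swap being described, with synthetic data and a generic scikit-learn booster as stand-ins for the real dataset and model: the train and validation blocks trade places while the test block stays untouched.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical time-ordered data cut into three consecutive blocks:
# train, validation, and an untouched test block (pure-noise labels,
# so this only demonstrates the mechanics, not a real edge).
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (rng.normal(size=3000) > 0).astype(int)
train, valid, test = slice(0, 1500), slice(1500, 2250), slice(2250, 3000)

def run(fit_on, check_on):
    """Fit on one block, report scores on the other block and on the test block."""
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X[fit_on], y[fit_on])
    return model.score(X[check_on], y[check_on]), model.score(X[test], y[test])

print("normal  (fit=train, check=valid):", run(train, valid))
print("swapped (fit=valid, check=train):", run(valid, train))
```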

 

However, there is something I don't understand about your ML. I get the impression that you simply hand the ML a set of data - predictors and so on - and say: "Now go and find me profit, and the more, the better."

It's like: here's a horse for you, Ivan, here's a sword and a shield, now go and bring me the Firebird; rumor has it it exists somewhere beyond the sea, beyond the ocean. If you don't find it, it's the axe for you. At least Ivan had the Little Humpbacked Horse, who knew everything, while the ML can only find something and ask: "Well, is this it? The Firebird, no?" - "No." - "All right, let's look some more."

Still, it would be nice to give Ivan at least some preliminary information, such as: it's in Bukhara, or in India with such-and-such a Shah. Then there are only two places to visit. The same goes for ML: fewer options, and the task is formulated more specifically.

 
Yuriy Asaulenko:

You have a box, inside which a kind of very hilly landscape is created. We throw a lot of balls in there (that's what the seeds are), and our job is to make sure that most of the balls end up in the deepest hollows. That will be the learning, and this is the principle by which learning in ML is arranged.

1. If we jiggle the box slightly, most of the balls will not be able to leave the hollows where they originally landed - learning will not happen.

2. If we shake the box hard, some of the balls have a chance to land and stay in only the deepest hollows, while the shallower ones remain unfilled because the balls pop out of them. Full learning will not happen.

3. If we shake the box with medium force, the deepest and middle hollows get filled, but the remaining balls find nothing and keep bouncing randomly around the box. The learning is better than in 1 and 2, but still not great.

There are always settings in the learning methods - exactly how and when to shake the box to get the most effective learning.

If different "seeds" don't converge to the same result, then either something is wrong with the learning algorithm - you are shaking the box wrong - or there are no deep hollows in our box to latch on to.
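The box-shaking principle is essentially simulated annealing at a fixed temperature. A minimal sketch - the one-dimensional "landscape" and all parameters below are made up for illustration:

```python
import math
import random

def landscape(x):
    # Hypothetical hilly floor of the box: a deep hollow near x = 2,
    # a shallow one near x = -1, walls rising toward the edges.
    return 0.1 * x**2 - 2.0 * math.exp(-(x - 2)**2) - 0.5 * math.exp(-(x + 1)**2)

def shake(seed, strength, steps=20000):
    rng = random.Random(seed)          # each seed = one ball thrown into the box
    x = rng.uniform(-5, 5)
    for _ in range(steps):
        cand = x + rng.gauss(0, strength)
        uphill = landscape(cand) - landscape(x)
        # Shaking can knock a ball uphill; the harder the shake, the bigger
        # the hill it can be knocked over (Metropolis acceptance rule).
        if uphill < 0 or rng.random() < math.exp(-uphill / strength):
            x = cand
    return round(x, 2)

for strength in (0.01, 5.0, 0.3):      # too gentle, too hard, medium
    print(strength, [shake(seed, strength) for seed in range(8)])
# Too gentle: each ball stays in whatever hollow it first rolled into.
# Too hard: balls never settle and keep bouncing all over the box.
# Medium: most seeds end up near the deep hollow at x = 2.
```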

Beautifully put, but I'm not sure it works the same way in boosting as in a NN (where the neuron weights are randomly initialized at the start of training); I couldn't find exact information about the implementation. In any case, deliberately throwing the balls to different points can be better, among other things because it lets you compare models when you change other settings. The only thing I don't understand is the range of this seed...
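On the seed question: a seed is just an integer that initializes the random generator (NumPy, for instance, accepts any value from 0 to 2^32 - 1), and its numeric value carries no meaning - it only fixes reproducibility. A minimal sketch of comparing seeds in boosting, with made-up data and scikit-learn standing in for whatever library is actually used; subsample < 1 makes the booster stochastic, so the seed actually matters:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Made-up data with a weak signal in the first feature; chronological
# split done by hand, no shuffling.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:1400], y[:1400], X[1400:], y[1400:]

# subsample < 1.0 turns this into stochastic gradient boosting,
# so different seeds really do "throw the balls" differently.
for seed in (1, 7, 42, 12345):
    model = GradientBoostingClassifier(subsample=0.7, random_state=seed)
    model.fit(X_tr, y_tr)
    print(seed, round(model.score(X_te, y_te), 3))
# If the scores cluster tightly, the "hollow" is deep enough to latch onto;
# if they scatter, the result depends on the seed more than on the data.
```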

 
Maxim Dmitrievsky:

For example, there is a chart - and what should I tell it? "Look for profit here, and don't look for it there, because I don't like that spot, I have bad associations with it"?

Exactly.)) That's exactly what you need to say. And the more, the better. We haven't been sitting in the market for years for nothing; we already know a few things: go to the right - you lose your horse, and so on.

And in general, where would anyone get to if they started from scratch, without using the knowledge and experience of previous generations? Yet that is exactly what we make the ML do.

 
Maxim Dmitrievsky:

And it will say - if you're so smart, take it yourself and trade without me.

I added it there.

 
Yuriy Asaulenko:

Still, it would be nice to give Ivan at least some preliminary information, such as: it's in Bukhara, or in India with such-and-such a Shah. Then there are only two places to visit. The same goes for ML: fewer options, and the task is formulated more specifically.

I'm thinking about an implementation with post-processing of the model based on the resulting trade balance; the goal is to get rid of false ideas about the market, if possible. But all these ideas have to be coded, and unfortunately that takes too long.
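A minimal sketch of what such balance-based post-processing might look like - the signals, returns, and acceptance criterion below are made-up illustrations, not the implementation being described: score a trained model by the equity curve of its out-of-sample trades and keep it only if the curve passes a sanity check.

```python
import numpy as np

def equity_curve(signals, returns):
    # signals: +1 long / -1 short / 0 flat per bar; returns: bar-to-bar price changes
    return np.cumsum(signals * returns)

def accept(signals, returns, min_profit=0.0, max_dd_frac=0.5):
    # Keep the model only if the balance ends positive and the worst
    # drawdown stays below a fraction of the final profit (hypothetical rule).
    curve = equity_curve(signals, returns)
    drawdown = np.max(np.maximum.accumulate(curve) - curve)
    return curve[-1] > min_profit and drawdown < max_dd_frac * max(curve[-1], 1e-9)

# Hypothetical out-of-sample signals from two candidate models
# (pure stand-ins for real model outputs):
rng = np.random.default_rng(1)
returns = rng.normal(0, 1, 500)
model_a = np.sign(rng.normal(size=500))
model_b = np.sign(returns + rng.normal(0, 2, 500))
for name, sig in (("A", model_a), ("B", model_b)):
    print(name, "accepted" if accept(sig, returns) else "rejected")
```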

 
Maxim Dmitrievsky:

Yet AlphaStar beats pro gamers at StarCraft (and its sibling AlphaZero at chess and Go) with only a month of training, or less, I forget - which is equivalent to ~200 years of a pro player's experience.

We don't know the training methodology. :) Initial conditions and a problem statement are always there.

 
Yuriy Asaulenko:

However, there is something I don't understand about your ML. I get the impression that you simply hand the ML a set of data - predictors and so on - and say: "Now go and find me profit, and the more, the better."

It's like: here's a horse for you, Ivan, here's a sword and a shield, now go and bring me the Firebird; rumor has it it exists somewhere beyond the sea, beyond the ocean. If you don't find it, it's the axe for you. At least Ivan had the Little Humpbacked Horse, who knew everything, while the ML can only find something and ask: "Well, is this it? The Firebird, no?" - "No." - "All right, let's look some more."

Still, it would be nice to give Ivan at least some preliminary information, such as: it's in Bukhara, or in India with such-and-such a Shah. Then there are only two places to visit. The same goes for ML: fewer options, and the task is formulated more specifically.

Although by name I may be associated with the main character of your tale, in essence I am not, because I merely suggest taking additional information from a trader's experience into account in the ML search for maximum profit - for example, in my topic on templates: https://www.mql5.com/ru/forum/270216
Machine learning of robots - 2018.08.02 - www.mql5.com
"Hello everyone, I am doing machine learning (ML) of Expert Advisors and indicators and decided to bring my experiments up for general discussion..."
 
Maxim Dmitrievsky:

I know; that's exactly how I teach my bots, with varying success so far (little experience).

For example: the bot taught itself to trade by trial and error, in about 4 minutes. On the right is the training period, on the left is new data.

No a priori knowledge was given to it.

Next in the queue are unique AI bot developments that will conquer not only the market but the whole world.

Training on the right is not quite logical. If we are looking in the quotes (the arguments) for information that affects the future price (the function), then training should always be on the left; otherwise we get an inverse problem, like finding the arguments from the function. :)
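A minimal sketch of "training on the left": with time-ordered data, fit on the earlier part and evaluate only on the later part. The series, lag count, and model below are made-up stand-ins:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical quote series; features are 5 lagged returns, target is
# the direction of the next bar's return.
rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=2000))
ret = np.diff(price)
X = np.column_stack([ret[i:len(ret) - 5 + i] for i in range(5)])
y = (ret[5:] > 0).astype(int)

split = int(len(X) * 0.7)                  # no shuffling: time order preserved
X_left, y_left = X[:split], y[:split]      # earlier ("left") data - fit here
X_right, y_right = X[split:], y[split:]    # later ("right") data - evaluate only

model = GradientBoostingClassifier(random_state=0).fit(X_left, y_left)
print("in-sample:", round(model.score(X_left, y_left), 3),
      "out-of-sample:", round(model.score(X_right, y_right), 3))
```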
 
Ivan Negreshniy:

If your data and model are adequate, then, in theory, the result should worsen.

Why? The bet alone is not interesting - the rationale matters too.

Reason: