Machine learning in trading: theory, models, practice and algo-trading - page 1328

 
Maxim Dmitrievsky:

it looks like we should be leveling up R instead of Python; Renat wrote that soon there will be a direct link, without crutches

i.e. CatBoost could be launched in one line straight from MT5.

Renat said nothing of the kind. The "gifts" are just some more libraries, that's all.
He did talk about the horror of connecting R, that part happened.
 
Aleksey Nikolayev:

Has anything specific come to light? I only saw his post about "gifts," no details.

What could be more specific? I think it's definitely not candy and cookies.

 
Maxim Dmitrievsky:

I am purely interested in the depth of the rabbit hole for research purposes

Hard to say. Very deep; you can't reach the bottom or even see it.

Maxim Dmitrievsky:

This may seem incomprehensible and strange to many. But in the end you will all come to brute force, without all this silly construction of features, targets and other childish stuff. And then the only thing left to discuss will be the details of how each of us has implemented it.

I no longer want to discuss anything to do with features, targets and subsamples.

I will respond only to sensible ideas about the "right" brute force.

In the end everyone will come to brute force. But the brute force can be cut down several times over, precisely by all those features, subsamples and the rest. In many problems part of the solution is already known, there is some prior information about it, and using it can reduce the brute forcing significantly. That is actually what I was doing back in November '17 )

One of these days I'll start over, this time in Python. It's time to make a new version.

 
Yuriy Asaulenko:

Hard to say. Very deep; you can't reach the bottom or even see it.

In the end everyone will come to brute force. But the brute force can be cut down several times over, precisely by all those features, subsamples and the rest. In many problems part of the solution is already known, there is some prior information about it, and using it can reduce the brute forcing significantly. That is actually what I was doing back in November '17 )

One of these days I'll start over, this time in Python. It's time to make a new version.

I think there is an infinite number of variants; it's not the case that one feature is better than another, everything is relative to the target.

obviously, if the features are built from the market, the news or other instruments, that's one thing, but when they are built from a random process (the time series itself) it's quite another

 
Aleksey Vyazmikin:

take, for example, a house in a child's drawing: the NN will rummage around for a very long time, while a tree will quickly pick out the coordinates of the object and simply turn them into vectors. But change the scale and the tree will no longer recognize that house, whereas the NN should still recognize it.

NNs really don't like scaling. Train on the price range 100-120, the price moves outside it, and that's it, the model breaks down. I simply divide everything price-related by the price itself and subtract one, and then use coefficients to drive the variables into the desired dynamic range.
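A minimal sketch of that kind of normalization, assuming pandas-based features; the column names and the scaling coefficient are illustrative, not the exact values used above:

```python
import pandas as pd

def normalize_price_features(df: pd.DataFrame, price_col: str = "close",
                             scale: float = 100.0) -> pd.DataFrame:
    """Divide every price-denominated column by the current price and subtract one,
    so the features stay near zero even when the absolute price leaves the range
    seen in training; `scale` is a hypothetical coefficient that stretches the
    result into the dynamic range the network expects."""
    price = df[price_col]
    out = pd.DataFrame(index=df.index)
    for col in df.columns:                     # e.g. open/high/low, MAs, channel borders
        out[col] = (df[col] / price - 1.0) * scale
    return out

# usage: feats = normalize_price_features(quotes[["open", "high", "low", "close", "ma_20"]])
```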

 
Maxim Dmitrievsky:

I think there is an infinite number of variants; it's not the case that one feature is better than another, everything is relative to the target

obviously, if the features are built from the market, the news or other instruments, that's one thing, but when they are built from a random process (the time series itself) it's quite another

Well, for example, we know that when the MA is rising it is better not to go short. Longs, understandably, remain an open question. The training sample for longs is immediately cut roughly in half by excluding the places "where it is better not to go". You can invent quite a lot of such priors and reduce the amount of training (brute force) several times over. In live work we do the same selection, and the data "where it is better not to go" simply never reaches the NN. On top of that, we don't load the NN with all sorts of nonsense.
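A minimal sketch of that kind of prior-based pre-filtering, assuming a pandas DataFrame of bars with a close column; the MA period and column names are illustrative:

```python
import pandas as pd

def prefilter_long_samples(bars: pd.DataFrame, ma_period: int = 50) -> pd.DataFrame:
    """Keep only the bars where the moving average is rising, i.e. where a long
    is at least allowed by the prior; everything else never reaches the model,
    which shrinks the training set before any brute force starts."""
    ma = bars["close"].rolling(ma_period).mean()
    ma_rising = ma.diff() > 0                  # the prior: MA going up
    return bars[ma_rising]

# usage: train_bars = prefilter_long_samples(quotes)   # then train the NN only on these rows
```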

 
Yuriy Asaulenko:

Well, for example, we know that when the MA is rising it is better not to go short. Longs, understandably, remain an open question. The training sample for longs is immediately cut roughly in half by excluding the places "where it is better not to go". You can invent quite a lot of such priors and reduce the amount of training (brute force) several times over. In live work we do the same selection, and the data "where it is better not to go" simply never reaches the NN.

well, these are all artificial cases; in a different situation these conditions will hold exactly the other way round

expert judgement gets added as priors one way or another, provided it is adequate, otherwise what's the point

 
Maxim Dmitrievsky:

well, these are all artificial cases; in a different situation these conditions will hold exactly the other way round

That is how I work. I don't teach the NN any nonsense, I filter the junk out beforehand. Why load the NN with something I can determine without it? The NN has an easier time, resources are freed up for subtler tasks, and even if training time isn't reduced, the quality of training improves.

I also want to try pre-training on an artificial, market-like signal. That is the next step in this direction; I wrote about it earlier.
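A minimal sketch of one way to generate such an artificial "a la market" signal for pre-training, assuming a plain geometric random walk; the actual generator meant above is not described, so this is just one possible stand-in:

```python
import numpy as np

def synthetic_market_signal(n: int = 10_000, start: float = 100.0,
                            drift: float = 0.0, vol: float = 0.002,
                            seed: int = 0) -> np.ndarray:
    """Geometric random walk with optional drift: a crude price-like series used
    only to pre-train the network before fine-tuning on real quotes."""
    rng = np.random.default_rng(seed)
    log_returns = drift + vol * rng.standard_normal(n)
    return start * np.exp(np.cumsum(log_returns))

# usage: prices = synthetic_market_signal()
# pre-train on features built from `prices`, then continue training on real market data
```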

 
Yuriy Asaulenko:

That is how I work. I don't teach the NN any nonsense, I filter the junk out beforehand. Why load the NN with something I can determine without it? The NN has an easier time, resources are freed up for subtler tasks, and even if training time isn't reduced, the quality of training improves.

I also want to try pre-training on an artificial, market-like signal. That is the next step in this direction; I wrote about it earlier.

Again, we are talking about different approaches

Your training is with a teacher (supervised), because you lay in the priors from the start; mine is without a teacher.

 
Maxim Dmitrievsky:

Again, we are talking about different approaches

Your training is with a teacher (supervised), because you lay in the priors from the start; mine is without a teacher.

You can do all the same things without a teacher. I don't see the difference.

Imagine: a whole bunch of neurons gets trained to solve a problem that a couple or three if statements would handle... The NN's brains are just clogged with that rubbish instead of thinking about beautiful things...))