Machine learning in trading: theory, models, practice and algo-trading - page 1610

 
mytarmailS:

Why?

The "richer" the model, the worse it is?

especially since you don't know yourself which combination of predictors is better, wouldn't it be correct to feed all the possible options into the model and then look at the importance of the predictors from the model's point of view?

it doesn't work that way
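For reference, the "feed everything in and rank by importance" approach described above would look roughly like this in scikit-learn. This is only a sketch on synthetic data; the model, sizes, and names are illustrative assumptions, not code from the thread.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))          # 50 candidate predictors
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # only two of them carry signal

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Rank predictors by the model's own importance estimate.
order = np.argsort(model.feature_importances_)[::-1]
print("top 5 predictors by importance:", order[:5])
```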

 
mytarmailS:

Sorry, I meant the support and resistance levels

Initially there was no task of identifying support and resistance levels, so they are not represented in the formalized training data. I have the simplest signal system, based on patterns.

 
mytarmailS:

I have a theoretical question

We have a target function that the model will approximate.

There are predictors, say 1000 of them.

So the question is: since we have a lot of predictors, can we divide them into equal parts, say 100 each, and train 10 models?

Then the outputs of these 10 models are fed to a new model as predictors. Will this be equivalent to one model trained on all 1000 predictors at once?

Something tells me it won't, but I want to hear opinions.

The problem is that you don't know which predictors work well in combination with each other. You'll have to make many different sets, as you rightly concluded. I do something similar when I exclude predictors from the tree model. If the individual models were effective on their own, then merging them might improve the overall result; again, I do this with leaves, by grouping them together.
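For illustration, the split-and-stack scheme from the question might be sketched like this in scikit-learn. The group size, model choices, and data are assumptions for the sake of the example, not anything from the thread.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 1000))
# Signal spread over a few features that land in different groups.
y = (X[:, 0] + X[:, 250] + X[:, 750] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

groups = np.array_split(np.arange(X.shape[1]), 10)  # 10 groups of 100 predictors
base = [GradientBoostingClassifier(random_state=0).fit(X_tr[:, g], y_tr)
        for g in groups]

# The base models' outputs become the meta-model's predictors.
Z_tr = np.column_stack([m.predict_proba(X_tr[:, g])[:, 1]
                        for m, g in zip(base, groups)])
Z_te = np.column_stack([m.predict_proba(X_te[:, g])[:, 1]
                        for m, g in zip(base, groups)])
meta = LogisticRegression().fit(Z_tr, y_tr)
print("stacked accuracy:", meta.score(Z_te, y_te))
```

Note that this naive version trains the meta-model on in-sample base predictions; a proper stacking setup would build the meta-model's training matrix from out-of-fold predictions (e.g. with sklearn.model_selection.cross_val_predict) to avoid leakage.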

 
Maxim Dmitrievsky:


Hello, Max!

It's good to have you back... Have you seen the warlock anywhere? I'm bored...

 
Alexander_K2:

Hello, Max!

It's good to have you back... Have you seen the warlock anywhere? I'm bored...

Hey, is that the guy with the long beard in line for a free lunch? I think I saw him yesterday.

 
Maxim Dmitrievsky:

Hey, is that the guy with the long beard in line for a free lunch? I think I saw him yesterday.

:))) What do you think about fxsaber's research, which confirms the dominant role of time in the market? Are you continuing your research? I have a paradox so far: in tests everything is great, but in practice it looks like crap... I'm still figuring it out.

 
Alexander_K2:

:))) What do you think about fxsaber's research, which confirms the dominant role of time in the market? Are you continuing your research? I have a paradox so far: in tests everything is great, but in practice it looks like crap... I'm still figuring it out.

I don't follow, is there a link?

I haven't done anything yet.
 
Maxim Dmitrievsky:

I don't follow, is there a link?

I haven't done anything yet.

Here's the first one I found:

Forum on Trading, Automated Trading Systems and Testing Trading Strategies

Some signs of a proper TS

fxsaber, 2020.03.05 13:02

I want to clarify that we are considering a TS whose only inputs are the bid/ask/time_msc series. There is nothing else. And this TS is tuned through the optimizer.

There were other similar posts there too, but I'm too lazy to search...

 
Maxim Dmitrievsky:

I haven't done anything yet.

Max, don't scare me by making it sound like you've left forex altogether... That would be very sad... Everything is just beginning :))

 
mytarmailS:

I have a theoretical question

We have a target function that the model will approximate.

There are predictors, say 1000 of them.

So the question is: since we have a lot of predictors, can we divide them into equal parts, say 100 each, and train 10 models?

Then the outputs of these 10 models are fed to a new model as predictors. Will this be equivalent to one model trained on all 1000 predictors at once?

Something tells me it won't, but I want to hear opinions.

If by the word "predictors" you mean features, then I think in the general case it will not be equivalent; it depends on how you divide the features. Most likely, due to lack of data, a model that could theoretically learn from the 1000 will not learn from 100 of them.
It is also unclear why you would do this: the features are chosen to give the model a minimally sufficient set of data. Since the set was conceived as minimal to begin with, how can it be divided up later?
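A toy illustration of that point, under the assumption that the target depends on an interaction between features that land in different groups: no sub-model can see the interaction, so the split models cannot match a model trained on all the predictors at once. Synthetic data, illustrative parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(4000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # target is a pure interaction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("both features:", full.score(X_te, y_te))  # close to 1.0

# Each "split" model sees only one of the two interacting features
# and can do no better than guessing.
for j in (0, 1):
    part = RandomForestClassifier(random_state=0).fit(X_tr[:, [j]], y_tr)
    print(f"feature {j} only:", part.score(X_te[:, [j]], y_te))  # near 0.5
```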